
Synapze — Financial Intermediary MCP

Server Details

Connect AI agents to licensed insurance brokers in France via MCP. Quotes, appointments, WhatsApp.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
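The gateway speaks MCP over Streamable HTTP, which carries JSON-RPC 2.0 messages in HTTP POST bodies. A minimal sketch of the request body a client sends to invoke a tool (tool name and arguments are illustrative; endpoint URL, session headers, and auth are omitted):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build the JSON-RPC 2.0 envelope MCP uses for a tools/call request."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# Example: invoking find_broker through the gateway (argument values are illustrative)
body = make_tool_call(1, "find_broker", {"product": "rc_pro", "city": "paris"})
```

The gateway sits between this request and the server, which is where the per-tool access control and call logging described above are applied.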
Tool Descriptions (Grade: A)

Average score of 4.3/5 across all 16 tools.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes with clear boundaries, such as find_broker for discovery, get_quote for pricing, and save_lead for CRM entry. However, some overlap exists: check_coverage and get_product_details both relate to product details, and save_document vs send_document could cause confusion as both handle documents, though send_document adds WhatsApp delivery.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case, such as book_appointment, bulk_quote, and get_client_360. This uniformity makes the set predictable and easy to navigate, with no deviations in style or convention.

Tool Count: 4/5

With 16 tools, the count is slightly high but reasonable for a financial intermediary server covering broker discovery, quoting, CRM, and document management. It supports comprehensive workflows without being overly bloated, though it borders on the upper limit of ideal scope.

Completeness: 5/5

The tool set provides complete coverage for the financial intermediary domain, including broker search, product listing, quoting, client management, document handling, and CRM interactions. There are no obvious gaps; tools support end-to-end processes from lead generation to follow-up, ensuring agents can handle all necessary tasks.

Available Tools (16 tools)
book_appointment (Grade: A)
Destructive

Create a callback alert in the broker's CRM and send a WhatsApp welcome template to the client. Use this when the client wants to be contacted by an advisor after receiving quotes. IMPORTANT: Use the same broker_code from your previous find_broker/get_quote/get_products call. Always collect the client's phone number, first_name, and last_name before calling this tool.

Parameters (JSON Schema)

client_phone (required): Client phone number with country code (e.g. '+33612345678')
first_name (optional): Client's first name
last_name (optional): Client's last name
email (optional): Client's email address
gender (optional): Client's gender: 'M' or 'F'
birth_date (optional): Client's date of birth in YYYY-MM-DD format
postal_code (optional): Client's postal code (e.g. '75001')
broker_code (optional): Broker code returned by find_broker. Optional in broker-authenticated mode.
quote_ref (optional): Reference from a previous get_quote call
preferred_slot (optional): Preferred appointment time in ISO 8601 (e.g. '2026-03-26T10:00')
context (optional): Brief context about the client's need (e.g. 'RC Pro quote for architect')
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructiveHint=true and readOnlyHint=false; the description adds valuable specifics about what 'destructive' means here (sends WhatsApp message, triggers CRM notification). Critically discloses data handling policy (phone transmitted but not stored) that annotations cannot convey.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five sentences, zero waste. Front-loaded with core purpose (booking via WhatsApp), followed by side effects (client message, CRM notification), return value, and privacy note. Every sentence conveys distinct operational information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 5-parameter destructive operation with external side effects, description adequately covers channel, notifications, and privacy. Mentions 'confirmation status' return despite lacking output schema. Minor gap: doesn't specify failure modes or idempotency behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear types and examples (e.g., '+33612345678'). Description mentions phone transmission generally but doesn't add syntax, validation rules, or semantics beyond what the schema already provides. Baseline 3 appropriate for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (Book), resource (appointment with insurance broker), and channel (via WhatsApp). Clearly distinguishes from siblings like find_broker (discovery) or get_quote (information retrieval) by focusing on the booking action and communication method.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies prerequisites through schema descriptions (broker_code from find_broker, quote_ref from get_quote), but description prose lacks explicit when-to-use guidance or differentiation from save_lead. No mention of required sequence or alternative paths.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bulk_quote (Grade: A)
Read-only

Generate quotes for multiple product types at once with the same client profile. Returns a consolidated matrix of all quotes across product families. Use this when the client needs coverage for multiple products (e.g. sante + prevoyance + gav).

Parameters (JSON Schema)

birth_date (required): Client birth date YYYY-MM-DD
postal_code (required): French postal code
product_types (required): Product types to quote (e.g. ['sante', 'gav', 'prevoyance'])
budget (optional): Monthly budget in euros
gender (optional): Gender: M or F
regime (optional): Social security regime: general, tns, alsace_moselle
has_spouse (optional): Whether client has a spouse to cover
spouse_birth_date (optional): Spouse birth date YYYY-MM-DD
children (optional): Children details
number_of_children (optional): Number of children
coverage_zone (optional): For sante_internationale
departure_date (optional): For sante_internationale
insurance_regime (optional): For sante_internationale
destination_country (optional): For sante_internationale
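Three parameters are required; the rest refine the client profile. A sketch of assembling the call arguments with fail-fast validation (the KNOWN_PRODUCT_TYPES set is an assumption extrapolated from the schema's examples, not the server's authoritative enum):

```python
from datetime import date

# Assumed product families; the schema only gives examples ('sante', 'gav', 'prevoyance')
KNOWN_PRODUCT_TYPES = {"sante", "prevoyance", "gav", "sante_internationale"}

def build_bulk_quote_args(birth_date: str, postal_code: str,
                          product_types: list[str], **optional) -> dict:
    """Assemble the three required bulk_quote fields, failing fast on bad input."""
    date.fromisoformat(birth_date)  # raises ValueError unless YYYY-MM-DD
    if not product_types:
        raise ValueError("product_types must name at least one product family")
    unknown = set(product_types) - KNOWN_PRODUCT_TYPES
    if unknown:
        raise ValueError(f"unrecognised product types: {sorted(unknown)}")
    return {"birth_date": birth_date, "postal_code": postal_code,
            "product_types": product_types, **optional}
```

Validating locally is cheaper than a round trip that quotes several product families at once.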
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, establishing this as a safe read operation. The description adds valuable behavioral context about the output format ('consolidated matrix'), which isn't covered by annotations. However, it doesn't mention potential limitations like rate limits or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence states purpose and output format, the second provides usage guidance with examples. Every word serves a clear function, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 14 parameters and no output schema, the description provides excellent purpose and usage guidance. However, it doesn't describe the structure of the returned 'consolidated matrix' or potential error conditions, which would be helpful given the complexity. The annotations cover safety aspects well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 14 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, only implying that parameters represent 'client profile' information. This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('generate quotes for multiple product types at once'), resource ('with the same client profile'), and output format ('consolidated matrix of all quotes across product families'). It distinguishes from the sibling 'get_quote' tool by emphasizing bulk/multi-product capability versus single-quote generation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('when the client needs coverage for multiple products') and provides concrete examples of product combinations ('e.g. sante + prevoyance + gav'). This gives clear guidance for choosing this over the single-quote sibling tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_coverage (Grade: A)
Read-only

Verify if a specific insurance need is covered by this broker's available products. Returns coverage match score, identified gaps, and recommendations. Useful for comparing what a client needs vs what the broker can offer.

Parameters (JSON Schema)

need (required): Description of the insurance need (e.g. 'professional liability for architect firm with 3 employees')
broker_code (optional): Broker code returned by find_broker. Optional in broker-authenticated mode.
current_coverage (optional): Description of current coverage if any, to identify gaps
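The returned "coverage match score" has no documented formula; a toy keyword-overlap score illustrates the idea (purely illustrative, not the server's algorithm):

```python
def naive_match_score(need: str, product_descriptions: list[str]) -> float:
    """Toy score: fraction of need keywords found in any product description."""
    keywords = {w.lower() for w in need.split() if len(w) > 3}
    if not keywords:
        return 0.0
    covered = {k for k in keywords
               if any(k in p.lower() for p in product_descriptions)}
    return round(len(covered) / len(keywords), 2)
```

Uncovered keywords in such a scheme would correspond to the "identified gaps" the tool reports alongside the score.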
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true. Description adds valuable return value disclosure ('coverage match score, identified gaps, and recommendations') compensating for absent output schema. No contradictions with safety annotations. Does not disclose rate limits or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. Front-loaded action verb 'Verify'. Second sentence productively discloses return values. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 3-parameter read-only tool. Description compensates well for missing output schema by detailing return values (score, gaps, recommendations). Could improve by explicitly stating prerequisite use of find_broker, though schema hints at this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions (including examples like 'jmassure' and cross-reference to find_broker). Description references 'insurance need' and 'broker's products' which map to parameters, but adds no syntax, format, or semantic details beyond what the schema already provides. Baseline 3 appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Verify' with clear resource 'insurance need' and scope 'broker's products'. Effectively distinguishes from sibling get_products (which lists products) by focusing on coverage verification against specific needs rather than product enumeration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or alternatives guidance. However, the schema parameter description for broker_code references 'find_broker', implying a workflow sequence. Lacks explicit guidance distinguishing it from get_products (browse) vs check_coverage (verify match).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

download_quote_pdf (Grade: A)
Read-only

Download the official quote PDF from the partner API (e.g., SPVIE). Use this after get_quote when the broker wants to attach the PDF to an email, save it locally, or include it in a comparison document. Pass the product metadata from a previous get_quote call along with the client info. Returns the PDF as base64.

Parameters (JSON Schema)

provider (required): Provider name from get_quote results (the 'provider' field)
product_id (required): Product ID from get_quote result
product_name (required): Product name from get_quote result
level_code (required): Level code from get_quote result
level_name (required): Level name from get_quote result
monthly_price (required): Monthly price from get_quote result
client_first_name (required): Client's first name
client_last_name (required): Client's last name
client_birth_date (required): Client birth date YYYY-MM-DD
client_postal_code (required): Client postal code
client_gender (optional): Client gender: 'M' or 'F'
client_email (optional): Client email
client_street (optional): Client street address
effect_date (optional): Effective date YYYY-MM-DD
commission (optional): Commission percentage from get_quote metadata
db_product_id (optional): Database product ID from get_quote metadata
db_niveau_id (optional): Database niveau ID from get_quote metadata
partner_product_code (optional): Partner product code from get_quote metadata
metadata (optional): Additional provider-specific metadata
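Because the PDF arrives base64-encoded, the caller must decode it before attaching or saving. A minimal sketch (the %PDF magic-byte check is a lightweight sanity test, not a guarantee of a valid document):

```python
import base64

def save_quote_pdf(b64_payload: str, path: str) -> int:
    """Decode the base64 PDF returned by download_quote_pdf and write it to disk."""
    raw = base64.b64decode(b64_payload, validate=True)
    if not raw.startswith(b"%PDF"):
        raise ValueError("decoded payload does not look like a PDF")
    with open(path, "wb") as fh:
        fh.write(raw)
    return len(raw)  # bytes written, handy for logging or size checks
```

The same decoded bytes can be attached to an email or fed into a comparison document, the two use cases the description calls out.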
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations: it specifies the PDF format ('official quote PDF'), source ('partner API'), and return format ('PDF as base64'). While annotations already indicate readOnlyHint=true and destructiveHint=false, the description provides practical implementation details about the API interaction and output format that aren't covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise: three sentences that each earn their place. The first states the purpose, the second provides usage guidelines with concrete examples, and the third covers parameter requirements and return format. No wasted words, front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with rich annotations (readOnlyHint, openWorldHint) and 100% schema coverage but no output schema, the description does well by explaining the return format ('PDF as base64') and the prerequisite relationship with get_quote. It could potentially mention error cases or limitations, but covers the essential context given the available structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 19 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, only mentioning that parameters should come 'from a previous get_quote call' and include 'client info'. This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Download the official quote PDF') and resource ('from the partner API'), and distinguishes it from sibling tools by explicitly mentioning it should be used 'after get_quote'. It provides concrete examples of use cases (attaching to email, saving locally, comparison document).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('after get_quote') and specific scenarios when it's appropriate ('when the broker wants to attach the PDF to an email, save it locally, or include it in a comparison document'). It also mentions the prerequisite of having 'product metadata from a previous get_quote call'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_broker (Grade: A)
Read-only

Find a licensed insurance broker in France. Search by product type (mutuelle, RC Pro, MRH, auto, prévoyance, santé internationale, épargne), city, and language. Returns broker name, specialties, DDA compliance status, and connection endpoint.

Parameters (JSON Schema)

product (required): Insurance product type: mrh, rc_pro, sante, sante_internationale (international health/expat/WHV), prevoyance, auto, epargne
city (optional): City or zone in France (e.g. 'paris', 'lyon', 'marseille')
language (optional): Preferred language: fr, en
sub_specialty (optional): Sub-specialty niche within the product (e.g. 'architectes', 'sci_is', 'dirigeants')
ticket (optional): Project amount in EUR (for routing large-ticket cases)
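Agents usually receive product needs as free text, not enum codes. A hypothetical alias table for mapping user phrasing onto the product enum (the aliases themselves are assumptions; only the codes come from the schema):

```python
# Hypothetical phrase-to-code mapping; only the right-hand codes come from the schema
PRODUCT_ALIASES = {
    "professional liability": "rc_pro",
    "expat health": "sante_internationale",
    "income protection": "prevoyance",
    "mutuelle": "sante",
    "health": "sante",
    "home": "mrh",
    "car": "auto",
    "savings": "epargne",
}

def resolve_product(user_phrase: str) -> str:
    """Map a free-text product mention onto find_broker's product enum."""
    phrase = user_phrase.strip().lower()
    for alias, code in PRODUCT_ALIASES.items():  # multi-word aliases are listed first
        if alias in phrase:
            return code
    raise ValueError(f"no product code matches {user_phrase!r}")
```

Listing multi-word aliases first prevents "expat health" from being swallowed by the generic "health" entry.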
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish the read-only, non-destructive nature. The description adds valuable behavioral context: it discloses return values (broker name, specialties, availability, connection endpoint), regulatory compliance (DDA-compliant), and scoring methodology (V2 specialty scoring) that annotations don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four tightly-constructed sentences front-load the core action, followed by search dimensions, return values, and methodology. No redundancy or filler text; every clause provides distinct information about capabilities or constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description appropriately compensates by listing return fields (broker name, specialties, etc.). It covers the matching algorithm and compliance context. Minor gap: it doesn't narratively mention the 'ticket' or 'sub_specialty' parameters, though these are fully documented in the 100%-covered schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds significant semantic value by expanding product type codes to human-readable insurance categories (e.g., 'MRH home insurance', 'RC Pro professional liability'), which helps the agent map user intents to the enum values better than the schema's technical codes alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a precise action ('Find'), resource ('licensed insurance broker'), and scope ('in France'). It distinguishes from siblings like 'get_broker_info' (retrieval) and 'book_appointment' (scheduling) by emphasizing matching ('most relevant') and search dimensions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description establishes clear context for when to use this tool—when seeking a broker match based on product type, location, and language needs. While it doesn't explicitly name sibling alternatives (e.g., 'use get_broker_info if you already have a broker ID'), the 'find' vs. 'get' verb distinction and mention of 'for a given need' provide clear usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_broker_info (Grade: A)
Read-only

Broker branding and identity. Returns: company name, logo URL, brand color (#hex), address, postal code, phone, ORIAS number, website, specialties, and DDA compliance status. ALWAYS call this before generating any document (PDF, PPTX, comparison, advisory note) to brand it with the broker's logo, color, name, address, and ORIAS number.

Parameters (JSON Schema)

broker_code (optional): Broker code returned by find_broker. Optional in broker-authenticated mode.
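The returned brand color is a #hex string destined for generated documents, so it is worth validating before injecting it into a template. A small guard (the black fallback is an arbitrary choice):

```python
import re

HEX_COLOR_RE = re.compile(r"^#(?:[0-9a-fA-F]{3}|[0-9a-fA-F]{6})$")  # #abc or #aabbcc

def safe_brand_color(value: str, fallback: str = "#000000") -> str:
    """Return the broker's brand color if it is a valid #hex code, else a fallback."""
    return value if HEX_COLOR_RE.match(value) else fallback
```

The same pattern applies to the other branding fields: validate before templating rather than trusting the payload blindly.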
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm read-only safety (readOnlyHint: true). Description adds valuable behavioral context by detailing what specific data fields are returned, compensating for the absence of an output schema. Does not mention error handling for invalid broker codes, but covers the happy path thoroughly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence front-loads the value proposition (what info is retrieved), second sentence provides the critical dependency. Every word earns its place with no redundant filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool, description is complete. It compensates for missing output schema by listing returned fields, references the sibling tool for parameter sourcing, and aligns with annotations. No gaps remain that would hinder correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with examples ('jmassure', 'protecsia'). Description references broker_code but largely echoes the schema's explanation that it comes from find_broker. With full schema coverage, baseline score applies; description adds minimal semantic value beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Get detailed information), identifies resource (insurance broker), and enumerates exact data fields returned (address, specialties, opening hours, languages, DDA compliance). Clearly distinguishes from find_broker by emphasizing it retrieves details for a specific broker rather than searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states prerequisite workflow: 'Use the broker_code returned by find_broker,' establishing clear sequencing with its sibling tool. Lacks explicit 'when not to use' guidance, but the dependency statement effectively guides proper invocation context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_client_360 (Grade: A)
Read-only

Get the complete 360-degree view of a client: identity, active projects, quotes, recent calls, recent emails, documents, and consent status. Returns everything a broker needs to prepare for a client interaction.

Parameters (JSON Schema)

client_phone (required): Client phone number with country code
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and data scope. The description adds value by specifying what data is included (e.g., consent status) and the use case for brokers, but does not disclose additional behavioral traits like rate limits, auth needs, or response format details. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences that are front-loaded with the tool's purpose and efficiently detail the scope and use case. Every sentence adds value without redundancy, making it appropriately sized and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (aggregating multiple data types) and lack of output schema, the description does well by listing the data components and stating the return purpose. However, it could be more complete by hinting at the response structure or limitations, though annotations provide some context (e.g., openWorldHint=false).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter 'client_phone', with the schema providing its type and description. The description does not add further meaning beyond implying the phone number identifies the client, so it meets the baseline of 3 where the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the verb 'Get' and the resource 'complete 360-degree view of a client', listing specific components like identity, active projects, quotes, calls, emails, documents, and consent status. It clearly distinguishes this from sibling tools by emphasizing it returns 'everything a broker needs' for client interaction preparation, unlike more focused tools like get_quote or get_broker_info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage: 'prepare for a client interaction' and 'returns everything a broker needs', implying this tool is for comprehensive client overviews. However, it does not explicitly state when NOT to use it or name specific alternatives (e.g., using get_quote for just quotes), which prevents a score of 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_hot_leads (Grade: A)
Read-only

Get the prioritized list of leads that need attention: pending callbacks, untouched new leads, stale quotes without follow-up. Each lead includes a reason explaining why it's hot. Use this at the start of the day to know who to call first.

Parameters (JSON Schema)

days (optional, default 7): Look-back period in days for stale detection
limit (optional, default 20): Maximum number of leads to return
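The three "hot" categories and the days look-back can be sketched locally (the category names and ranking order are assumptions; only the 7-day default and the limit come from the schema):

```python
from datetime import datetime, timedelta

def is_stale(last_contact: datetime, now: datetime, days: int = 7) -> bool:
    """Mirror the documented default: a quote goes stale after `days` without contact."""
    return now - last_contact > timedelta(days=days)

def prioritize(leads: list[dict], now: datetime,
               days: int = 7, limit: int = 20) -> list[dict]:
    """Toy ranking: pending callbacks first, then untouched leads, then stale quotes."""
    rank = {"pending_callback": 0, "untouched": 1, "stale_quote": 2}
    hot = [l for l in leads
           if l["kind"] != "stale_quote" or is_stale(l["last_contact"], now, days)]
    return sorted(hot, key=lambda l: rank[l["kind"]])[:limit]
```

A real implementation would also attach the per-lead "reason" string the tool returns; the sketch only reproduces the filtering and ordering.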
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable context about what constitutes a 'hot' lead (pending callbacks, untouched new leads, stale quotes) and the inclusion of a reason field, which goes beyond annotations. However, it doesn't mention rate limits, authentication needs, or pagination behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose and followed by usage guidance. Every word earns its place, with no redundancy or fluff, making it highly efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema), annotations cover safety and scope well, and the description adds key behavioral context (what makes a lead 'hot', reason inclusion, usage timing). It could benefit from mentioning output format or error handling, but it's largely complete for a read-only query tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation of the 'days' parameter for stale detection and 'limit' for the maximum number of leads returned. The description doesn't add any parameter-specific information beyond what's in the schema, but the schema is comprehensive, so the baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'get' and resource 'prioritized list of leads', specifying it includes 'pending callbacks, untouched new leads, stale quotes without follow-up' and that each lead includes a reason. This distinguishes it from siblings like get_client_360 or get_quote by focusing on urgency/priority rather than general retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states 'Use this at the start of the day to know who to call first', providing clear when-to-use guidance. It also implies alternatives by specifying the tool's scope (hot leads only), suggesting other tools for non-priority leads or different actions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_product_details (A)
Read-only

Get detailed guarantee information for a specific insurance product. Returns coverage tables (hospitalisation, dental, optical, hearing aids, etc.) with reimbursement rates and limits per formula level. Use this after get_quote to show the client exactly what a product covers. The client can then compare two products by calling this tool twice.

Parameters (JSON Schema)
• level_name (optional): Specific coverage level/formula name (e.g. 'EN3', 'Confort - Formule B', 'AS2'). If omitted, returns the full product sheet with all levels.
• broker_code (optional): Broker code returned by find_broker. Optional in broker-authenticated mode.
• product_name (required): Product name as returned by get_products or get_quote
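The description's comparison workflow is simply two calls to this tool. A minimal sketch of building the two argument payloads (the helper name and product names are hypothetical; only product_name is required per the schema):

```python
# Sketch: assemble get_product_details arguments; omitting level_name
# returns the full product sheet with all levels.
def product_details_args(product_name, level_name=None, broker_code=None):
    args = {"product_name": product_name}  # the only required field
    if level_name is not None:
        args["level_name"] = level_name
    if broker_code is not None:
        args["broker_code"] = broker_code
    return args

# Comparing two products means calling the tool twice, once per product.
calls = [product_details_args("Produit Sante X", level_name="EN3"),
         product_details_args("Produit Sante Y")]
```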
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare a read-only safety profile; the description adds crucial return-value context, since no output schema exists: 'Returns coverage tables (hospitalisation, dental... etc.) with reimbursement rates and limits.' The open-ended 'etc.' is consistent with the openWorldHint annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences cover: (1) purpose and return structure, (2) usage timing and rationale, (3) comparison workflow. No redundancy; every clause adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description adequately explains return structure (coverage tables with specific domains). Does not mention data format (JSON) or pagination, but sufficient for tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline applies. Description implicitly supports parameter semantics by referencing 'formula level' (level_name) and workflow 'after get_quote' (product_name source), but does not explicitly document parameter meanings beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Get detailed guarantee information) and resource (insurance product). Distinguishes from get_products (list) and get_quote (pricing) by focusing on coverage details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states workflow context: 'Use this after get_quote to show the client exactly what a product covers.' Also describes comparison pattern ('calling this tool twice'). Lacks explicit negative guidance (when not to use).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_products (A)
Read-only

Insurance product catalogue / List insurance products with eligibility criteria, coverage details, and indicative pricing. Filter by category: mrh (home), rc_pro (professional liability), sante (health/mutuelle), sante_internationale (international health for expatriates), prevoyance (life), epargne (savings), auto.

Parameters (JSON Schema)
• category (optional): Product category filter: mrh, rc_pro, sante, sante_internationale, prevoyance, auto, epargne
• broker_code (optional): Broker code returned by find_broker. Optional in broker-authenticated mode.
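Since the category filter is a closed enumeration in the schema, an agent can validate it client-side before calling the tool. A minimal sketch under that assumption (the helper name is hypothetical):

```python
# Sketch: guard the category filter against the enumerated values
# from the get_products schema before issuing the call.
VALID_CATEGORIES = {"mrh", "rc_pro", "sante", "sante_internationale",
                    "prevoyance", "auto", "epargne"}

def products_args(category=None, broker_code=None):
    if category is not None and category not in VALID_CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    args = {}
    if category is not None:
        args["category"] = category
    if broker_code is not None:
        args["broker_code"] = broker_code
    return args  # both fields optional; empty dict lists everything
```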
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only, non-destructive behavior. The description adds valuable behavioral context by disclosing what data the listing contains (eligibility criteria, coverage details, indicative pricing ranges), effectively substituting for the missing output schema. It does not mention pagination or caching behavior, but covers the essential return value semantics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states purpose and return content, the second the available category filters. Every word earns its place; no filler or redundant boilerplate.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, 100% schema coverage, good annotations, no output schema), the description is complete. It compensates for the missing output schema by describing the returned content and provides the necessary workflow context to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, establishing a baseline of 3. The description reinforces the broker_code parameter's provenance ('returned by find_broker'), but this largely mirrors the schema's own description ('Broker code from find_broker'). No additional semantic details (e.g., format constraints, valid category combinations) are added beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('List') and resource ('insurance products available from a broker') and clarifies scope by detailing what information is returned (eligibility criteria, coverage details, pricing ranges). It distinguishes from siblings like find_broker (which finds the broker) and get_product_details (implied singular retrieval) by focusing on listing available products from a specific broker.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool by specifying the prerequisite: 'Use the broker_code returned by find_broker.' This establishes the workflow sequence. However, it lacks explicit 'when-not' guidance or named alternatives (e.g., distinguishing when to use get_product_details instead).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_quote (A)
Read-only

Real-time insurance pricing / Generate real insurance quotes from partner APIs. Quotes for mutuelle (health), prévoyance, RC Pro, MRH, auto, and international health. Returns monthly prices per product and level. No PII stored or returned.

IMPORTANT: Required fields depend on product_type:
• For French products (sante, sante_tns, mrh, rc_pro, prevoyance, auto, gav, pj, emprunteur, etc.): birth_date + postal_code are REQUIRED. Use regime (general/tns/alsace_moselle) for health products.
• For sante_internationale (expat/WHV/nomad international health): birth_date + destination_country are REQUIRED. postal_code is NOT needed. Use insurance_regime (premier_euro/complement_cfe/complement_secu/etudiant) instead of regime. Also collect coverage_zone and departure_date.

Use product_data for product-specific information. Call get_products first to see quoteRequirements.guidance for each product — it tells you exactly what to ask the client.

product_data examples by product_type:
• auto: {marque, modele, annee, immatriculation, energie, puissance_fiscale, km_annuel, usage, stationnement, date_permis, bonus_malus, sinistres_3ans, formule: tiers/tiers_etendu/tous_risques}
• mrh: {type_logement: appartement/maison, statut_occupant: proprietaire/locataire/pno, surface, nb_pieces, etage, annee_construction, alarme, valeur_mobilier}
• emprunteur: {montant_pret, duree_pret_mois, taux_pret, type_pret: residence_principale/secondaire/investissement, fumeur, quotite_pct}
• rc_pro: {activite, code_naf, nb_salaries, ca_annuel}
• gav: {formule: individuelle/famille, seuil_intervention_pct: 5/10/15/30}
• per: {revenus_annuels, tmi, versement_initial, versement_mensuel, profil_risque: prudent/equilibre/dynamique}
• assurance_vie: {versement_initial, versement_mensuel, profil_risque, horizon_placement_annees}

Parameters (JSON Schema)
• budget (optional): Client's monthly budget in euros (e.g. 80). Results sorted by proximity to budget.
• gender (optional): Gender: M or F
• regime (optional): French social security regime: general, tns, alsace_moselle. For sante/sante_tns only.
• children (optional): Children details for family coverage
• show_all (optional): Return ALL quotes instead of top 5. Use only when client asks for more options.
• birth_date (required): Client birth date in YYYY-MM-DD format (e.g. '1988-05-15')
• has_spouse (optional): Whether the client has a spouse/partner to cover
• broker_code (optional): Broker code returned by find_broker. Optional in broker-authenticated mode.
• nationality (optional): Client nationality for sante_internationale (e.g. 'France'). Defaults to France.
• postal_code (optional): French postal code (e.g. '75008', '92150'). Required for French products, not needed for sante_internationale.
• product_data (optional): Product-specific data object. Fields depend on product_type; see description above for examples per product. Call get_products first to see quoteRequirements.guidance for the exact fields to collect.
• product_type (required): Product type: sante, sante_tns, sante_internationale, auto, mrh, emprunteur, rc_pro, gav, prevoyance, pj, ij, per, assurance_vie, scolaire, epargne
• coverage_zone (optional): Coverage zone for sante_internationale: monde_usa, monde_hors_usa, europe, asie_oceanie, ameriques.
• departure_date (optional): Coverage start date YYYY-MM-DD for sante_internationale. Defaults to 30 days from now.
• insurance_regime (optional): Insurance regime for sante_internationale: premier_euro, complement_cfe, complement_secu, etudiant. Do NOT confuse with 'regime' (French social security).
• spouse_birth_date (optional): Spouse birth date in YYYY-MM-DD format
• number_of_children (optional): Number of children to cover
• destination_country (optional): Destination country (e.g. 'Canada', 'Australie', 'Thailand'). REQUIRED for sante_internationale.
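The conditional required-field rules stated in the tool description can be checked client-side before calling the tool. A minimal sketch under those rules (field names come from the schema; the helper name and concrete values are illustrative):

```python
# Sketch: French products need birth_date + postal_code, while
# sante_internationale needs birth_date + destination_country instead.
def missing_quote_fields(args):
    required = {"product_type", "birth_date"}
    if args.get("product_type") == "sante_internationale":
        required.add("destination_country")
    else:
        required.add("postal_code")
    return sorted(required - args.keys())

french = {"product_type": "sante", "birth_date": "1988-05-15",
          "postal_code": "75008", "regime": "general"}
expat = {"product_type": "sante_internationale", "birth_date": "1990-01-01",
         "destination_country": "Canada", "insurance_regime": "premier_euro"}
```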
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable context beyond annotations by disclosing specific external data sources (SPVIE, Néoliane, Alptis APIs) and return format ('monthly prices per product and coverage level'). Annotations already confirm read-only, non-destructive nature; description complements this with data provenance and output structure details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste. Front-loaded with core action (Generate quotes), followed by data source context, prerequisites, and return value. Each sentence earns its place; no redundant or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a quote generation tool. Compensates for missing output_schema by describing return values ('monthly prices per product'). Covers critical behavioral context (external APIs, required user inputs). Minor gap: does not mention error scenarios (e.g., invalid postal codes) or rate limiting from partner APIs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline 3. Description highlights critical parameters (birth_date, postal_code) and their purpose ('for accurate pricing'), but does not add semantic details beyond what the schema already provides (e.g., no additional context on regime values or broker_code acquisition beyond schema examples).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Generate' with clear resource 'insurance quotes' and scope 'real... with actual pricing from partner APIs (SPVIE, Néoliane, Alptis)'. Distinguishes from siblings like get_products (browsing) or check_coverage (validation) by emphasizing live pricing calculation from specific carriers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states prerequisites: 'Requires birth_date... and postal_code for accurate pricing' and provides clear workflow guidance 'Ask the user for their birth date and postal code before calling this tool'. Lacks explicit mention of alternatives (e.g., when to use get_products instead), but provides strong prerequisite context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

log_interaction (A)

Log a client interaction in the CRM: call, email, WhatsApp, meeting, or note. Records the summary, updates last interaction date, optionally traces RGPD consent, and creates a follow-up reminder if needed. Use this after every client exchange to maintain DDA compliance and CRM accuracy.

Parameters (JSON Schema)
• type (required): Interaction type: call, email, whatsapp, meeting, note
• summary (required): Summary of the interaction
• next_action (optional): What needs to happen next
• client_phone (required): Client phone number with country code
• consent_given (optional): Set to true if the client gave RGPD consent
• duration_minutes (optional): Duration in minutes (for calls)
• next_action_date (optional): When the next action should happen (YYYY-MM-DD)
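A minimal sketch of client-side checks an agent might run before logging, mirroring the schema's type enum and YYYY-MM-DD date format (the helper name log_args and the sample values are hypothetical):

```python
import re

# Sketch: validate a log_interaction payload against the schema's
# enum and date format, then assemble the arguments dict.
INTERACTION_TYPES = {"call", "email", "whatsapp", "meeting", "note"}

def log_args(client_phone, interaction_type, summary, **optional):
    if interaction_type not in INTERACTION_TYPES:
        raise ValueError(f"bad type: {interaction_type}")
    date = optional.get("next_action_date")
    if date is not None and not re.fullmatch(r"\d{4}-\d{2}-\d{2}", date):
        raise ValueError("next_action_date must be YYYY-MM-DD")
    return {"client_phone": client_phone, "type": interaction_type,
            "summary": summary, **optional}

entry = log_args("+33612345678", "call", "Discussed health cover options",
                 duration_minutes=12, consent_given=True,
                 next_action="Send comparison", next_action_date="2026-04-02")
```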
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-readOnly, non-destructive, closed-world tool. The description adds valuable behavioral context beyond annotations: it specifies that logging updates the last interaction date, optionally traces RGPD consent, and creates follow-up reminders if needed. This clarifies side-effects and compliance aspects not covered by annotations, though it doesn't detail error handling or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose and key features, followed by usage guidance. Every sentence adds value: the first explains what the tool does and its scope, the second provides clear usage context. There is no wasted text or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, mutation operation) and lack of output schema, the description is reasonably complete. It covers purpose, usage, and key behavioral traits. However, it doesn't detail return values or error scenarios, which could be useful for an agent. With annotations providing safety context, it's mostly adequate but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds minimal parameter semantics beyond the schema—it mentions RGPD consent and follow-up reminders, which map to 'consent_given' and 'next_action'/'next_action_date' parameters, but doesn't provide additional syntax or format details. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Log') and resource ('client interaction in the CRM'), specifies the interaction types (call, email, WhatsApp, meeting, or note), and distinguishes this tool from siblings by focusing on logging interactions rather than booking appointments, finding brokers, or managing quotes. It provides a specific, actionable purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use this after every client exchange to maintain DDA compliance and CRM accuracy.' It provides clear context (after client interactions) and a compliance rationale; although it doesn't explicitly name alternatives among siblings, the guidance is sufficiently directive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_custom_quote (A)

Save a manually-created product offer to a prospect's comparison list in the CRM. Use this when the broker builds a custom offer in Claude (for a product not yet integrated via API, or with specific negotiated pricing). The custom quote appears next to the API-generated quotes in the prospect's file. The prospect must already exist — use save_lead first if needed.

Parameters (JSON Schema)
• notes (optional): Free-text notes visible in the CRM
• coverage (optional): List of coverage items
• level_name (required): Level or formula name
• description (optional): Short description of the offer
• client_phone (required): Client phone number with country code to identify the prospect
• product_name (required): Product name
• monthly_price (required): Monthly price in euros
• provider_name (required): Provider/company name
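A minimal sketch of assembling a save_custom_quote payload and checking the required fields from the schema before sending (the product, provider, and price values are illustrative, not from the source):

```python
# Sketch: required fields per the schema are client_phone, product_name,
# level_name, provider_name, and monthly_price.
REQUIRED = ("client_phone", "product_name", "level_name",
            "provider_name", "monthly_price")

custom_quote = {
    "client_phone": "+33612345678",
    "product_name": "Offre Sante Negociee",  # hypothetical custom offer
    "level_name": "Formule B",
    "provider_name": "Assureur X",           # hypothetical provider
    "monthly_price": 74.50,                  # euros per month
    "notes": "Negotiated rate, valid 30 days",
}

missing = [field for field in REQUIRED if field not in custom_quote]
```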
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies that the custom quote appears alongside API-generated quotes, mentions the prerequisite that 'the prospect must already exist', and clarifies this is for manually-created offers. While annotations cover read/write and destructive aspects, the description provides operational context without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by usage context and prerequisites. Every sentence earns its place with no wasted words, and the structure guides the agent from what the tool does to when/how to use it.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with no output schema, the description provides excellent context about the operation's purpose, prerequisites, and relationship to other data (appears next to API-generated quotes). It covers the essential 'why' and 'when' despite the schema handling parameter details. The only minor gap is lack of explicit error or success behavior description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 8 parameters thoroughly. The description doesn't add specific parameter semantics beyond what's in the schema, but it does provide overall context about what constitutes a 'custom quote' (product not integrated via API, negotiated pricing) which helps frame parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Save a manually-created product offer'), the target resource ('to a prospect's comparison list in the CRM'), and distinguishes it from API-generated quotes by specifying it's for custom offers built in Claude. This differentiates it from sibling tools like save_lead or get_quote.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('when the broker builds a custom offer in Claude for a product not yet integrated via API, or with specific negotiated pricing'), when not to use it (implied: for API-generated quotes), and provides a clear alternative prerequisite ('use save_lead first if needed'). It also distinguishes from sibling tools by specifying the custom quote context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_document (A)

Save a file (PDF, PPTX, DOCX, etc.) to a client's record in the broker's CRM. Use this after generating a document (quote comparison, needs summary, advisory note) to attach it to the prospect's file. The client must already exist as a lead (use save_lead first). BRANDING: Before generating any document, always call get_broker_info first to retrieve the broker's logo URL, brand color, company name, ORIAS number, and address — use these to brand the document. The file content must be base64-encoded.

Parameters (JSON Schema)
• label (optional): Description of the document (e.g. 'Comparatif santé internationale Espagne')
• file_name (required): File name with extension (e.g. 'comparatif-sante.pdf', 'presentation.pptx')
• broker_code (optional): Broker code returned by find_broker. Optional in broker-authenticated mode.
• client_phone (required): Client phone number with country code to identify the prospect
• document_type (optional): Document type: devis, fiche_conseil, comparatif, autre. Defaults to 'autre'.
• content_base64 (required): File content encoded in base64
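The base64 encoding requirement from the description can be sketched as follows; the byte string stands in for a generated PDF and the field values are illustrative:

```python
import base64

# Sketch: encode generated file bytes as base64 for content_base64,
# as the save_document description requires.
pdf_bytes = b"%PDF-1.4 (generated comparison document)"

doc_args = {
    "client_phone": "+33612345678",
    "file_name": "comparatif-sante.pdf",
    "document_type": "comparatif",
    "label": "Comparatif sante",
    "content_base64": base64.b64encode(pdf_bytes).decode("ascii"),
}

# The server side can decode content_base64 back to the original bytes.
round_trip = base64.b64decode(doc_args["content_base64"])
```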
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate a non-destructive write operation (readOnlyHint=false, destructiveHint=false). The description adds critical behavioral context: the base64 encoding requirement and the prerequisite that the client record must already exist. It does not address potential failures or idempotency, but covers the key operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each serving a distinct purpose: purpose statement, usage context, prerequisite warning, and technical requirement. No redundancy or filler. Front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 6-parameter write operation with no output schema, the description adequately covers the critical path (prerequisites, encoding, file types). Missing explicit error handling description (e.g., what happens if client_phone not found), but sufficiently complete for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description mentions file formats (PDF, PPTX, DOCX) which contextually supports the file_name and document_type parameters, and references base64 encoding for content_base64, but does not add semantic detail beyond what the comprehensive schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Save a file') and identifies the exact resource (client's record in the broker's CRM) and supported formats (PDF, PPTX, DOCX, etc.). It distinguishes from sibling tool 'save_lead' by specifying this attaches to existing leads vs creating them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('after generating a document (quote comparison, needs summary, advisory note)') and includes clear prerequisites ('The client must already exist as a lead (use save_lead first)'), directing the agent to the correct sibling tool for the prerequisite step.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_lead (A)

Save a client's contact information and insurance needs as a lead in the broker's CRM. Use this when a client has received quotes and wants to go further — the broker will contact them to finalize the subscription. IMPORTANT: Use the same broker_code from your previous find_broker/get_quote/get_products call. Always ask the client for their phone number, first name, and last name before calling this tool. Include the quote_ref from a previous get_quote call and any relevant context about their needs.

Parameters (JSON Schema)
• email (optional): Client's email address
• gender (optional): Client's gender: 'M' or 'F'
• context (optional): Brief summary of client's needs, budget, preferences (e.g. 'Health insurance, budget 80€/month, needs good dental coverage')
• last_name (required): Client's last name
• quote_ref (optional): Reference from a previous get_quote call (e.g. 'MCP-20260326-A1B2C3D4')
• birth_date (optional): Client's date of birth in YYYY-MM-DD format
• first_name (required): Client's first name
• broker_code (optional): Broker code returned by find_broker. Optional in broker-authenticated mode.
• postal_code (optional): Client's postal code (e.g. '75001')
• client_phone (required): Client phone number with country code (e.g. '+33612345678')
• product_type (optional): Insurance product type: sante, sante_tns, rc_pro, mrh, prevoyance, gav, pj
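A minimal sketch of a save_lead payload that chains context from earlier calls, as the description directs; the name, quote_ref, and context values are illustrative placeholders:

```python
# Sketch: a save_lead payload reusing quote_ref from a prior get_quote
# and summarising the client's needs in context.
lead = {
    "client_phone": "+33612345678",        # required
    "first_name": "Marie",                 # required
    "last_name": "Dupont",                 # required
    "product_type": "sante",
    "quote_ref": "MCP-20260326-A1B2C3D4",  # from a previous get_quote call
    "context": "Health insurance, budget 80 EUR/month, good dental cover",
}

required_ok = all(k in lead for k in ("client_phone", "first_name", "last_name"))
```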
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate write operation (readOnlyHint:false) and external effects (openWorldHint:true); the description adds valuable workflow context that 'the broker will contact them to finalize the subscription' and operational constraints ('Always ask the client for their phone number... before calling'). Does not mention idempotency or duplicate handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: purpose definition, usage trigger with workflow outcome, and prerequisites. Front-loaded with the core action, zero waste, and appropriate use of IMPORTANT for critical constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For an 11-parameter lead creation tool with full schema coverage, the description comprehensively covers the business process (lead capture → broker follow-up), prerequisites, and sibling tool integration without needing to repeat parameter definitions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3). The description adds critical operational semantics for required parameters with the IMPORTANT note to ask the client for phone/first/last name before invoking, and reinforces the workflow relationship of quote_ref to the get_quote sibling.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific verb ('Save') with clear resource ('lead in the broker's CRM') and distinguishes from siblings by specifying it stores 'contact information and insurance needs' rather than retrieving quotes or finding brokers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('when a client has received quotes and wants to go further') and references the sibling workflow ('Include the quote_ref from a previous get_quote call'), clearly positioning it in the sequence after get_quote and before broker contact.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send_document: A (Destructive)

Send a document to a client via WhatsApp. Use this after generating a quote comparison, advisory note, or any document the client should receive. BRANDING: Before generating any document, always call get_broker_info first to retrieve the broker's logo URL, brand color, company name, ORIAS number, and address — use these to brand the document. The file content must be base64-encoded. The document is uploaded, then sent via WhatsApp with a caption message. The document is also saved to the client's CRM record automatically.

Parameters (JSON Schema)

Name            | Required | Description
label           | No       | Description for the CRM record
caption         | No       | Message to accompany the document on WhatsApp (e.g. 'Voici votre comparatif santé internationale')
file_name       | Yes      | File name with extension (e.g. 'comparatif-sante.pdf')
broker_code     | No       | Broker code returned by find_broker. Optional in broker-authenticated mode.
client_phone    | Yes      | Client phone number with country code (WhatsApp recipient)
document_type   | No       | Document type: devis, fiche_conseil, comparatif, autre. Defaults to 'autre'.
content_base64  | Yes      | File content encoded in base64
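The base64 requirement is the one technical constraint an agent has to satisfy itself. A minimal sketch of preparing a send_document payload from raw file bytes, assuming standard base64 encoding (the schema says "base64" without naming a variant, so plain b64encode is an assumption; the actual transport is the MCP tool call, which is not shown):

```python
import base64

# Stand-in for the bytes of a generated PDF (illustrative, not a real document).
pdf_bytes = b"%PDF-1.7 example comparison document"

# Encode the raw bytes as base64 text, as content_base64 requires.
content_base64 = base64.b64encode(pdf_bytes).decode("ascii")

# Illustrative send_document arguments, using the parameter names from the schema above.
send_args = {
    "file_name": "comparatif-sante.pdf",
    "client_phone": "+33612345678",
    "caption": "Voici votre comparatif santé internationale",
    "document_type": "comparatif",
    "content_base64": content_base64,
}

# Round-trip sanity check: decoding the payload must reproduce the original bytes.
assert base64.b64decode(send_args["content_base64"]) == pdf_bytes
```

A common failure mode this guards against is passing the bytes' repr or a hex string instead of true base64; the round-trip assertion catches that before the document is sent to the client.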
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true; the description adds valuable workflow context explaining the multi-step process (uploaded, then sent via WhatsApp) and critical side effects (automatically saved to CRM). Also discloses the base64 encoding requirement. Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences structured logically: purpose → usage context → technical constraint → side effects. Each sentence earns its place with no redundant phrases. Could be slightly tightened but overall well-crafted.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers the complex multi-system workflow (WhatsApp + CRM integration) and side effects for a tool with no output schema. Missing only error handling or retry behavior documentation, which would be necessary for a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with detailed field documentation. Description mentions base64 encoding requirement and caption purpose, but these largely restate schema details. With complete schema coverage, baseline 3 is appropriate as description doesn't need to compensate for missing parameter docs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states specific action (send), resource (document), mechanism (via WhatsApp), and scope (to a client). The mention of CRM auto-save and WhatsApp delivery effectively distinguishes it from sibling 'save_document'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states trigger condition: 'Use this after generating a quote comparison, advisory note, or any document the client should receive.' Provides clear context for when to invoke, though it could explicitly name 'save_document' as the alternative for non-WhatsApp scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
