Synapze — Financial Intermediary MCP
Server Details
Connect AI agents to licensed financial intermediaries in France: insurance, credit, wealth.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 16 of 16 tools scored.
Most tools have distinct purposes, such as find_broker for broker discovery, get_quote for pricing, and save_lead for lead management. However, some overlap exists between save_document and send_document, as both handle document branding and storage, which could cause confusion about when to use each. Overall, descriptions clarify differences, but minor ambiguities remain.
Tool names follow a consistent verb_noun pattern throughout, such as find_broker, get_quote, save_lead, and log_interaction. This uniformity makes it easy for agents to predict tool functions and navigate the set without confusion. No deviations or mixed conventions are present.
With 16 tools, the set is well-scoped for a financial intermediary server covering broker discovery, quoting, client management, and document handling. Each tool serves a clear purpose, such as bulk_quote for multi-product quotes and get_hot_leads for lead prioritization, ensuring no redundancy or excessive complexity.
The toolset provides comprehensive coverage for insurance brokerage workflows, including broker search, product listing, quoting, client interaction logging, and document management. Minor gaps exist, such as the lack of tools for updating or deleting client records or quotes, but agents can work around these with existing tools like log_interaction and save_custom_quote.
Available Tools
16 tools

book_appointment (Destructive)
Create a callback alert in the broker's CRM and send a WhatsApp welcome template to the client. Use this when the client wants to be contacted by an advisor after receiving quotes. IMPORTANT: Use the same broker_code from your previous find_broker/get_quote/get_products call. Always collect the client's phone number, first_name, and last_name before calling this tool.
| Name | Required | Description | Default |
|---|---|---|---|
| email | No | Client's email address | |
| gender | No | Client's gender: 'M' or 'F' | |
| context | No | Brief context about the client's need (e.g. 'RC Pro quote for architect') | |
| last_name | No | Client's last name | |
| quote_ref | No | Reference from a previous get_quote call | |
| birth_date | No | Client's date of birth in YYYY-MM-DD format | |
| first_name | No | Client's first name | |
| broker_code | No | Broker code returned by find_broker. Optional in broker-authenticated mode. | |
| postal_code | No | Client's postal code (e.g. '75001') | |
| client_phone | Yes | Client phone number with country code (e.g. '+33612345678') | |
| preferred_slot | No | Preferred appointment time in ISO 8601 (e.g. '2026-03-26T10:00') |
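Since `client_phone` is the only required parameter, an MCP client would wrap it in a standard JSON-RPC 2.0 `tools/call` request. The sketch below builds that payload; the helper name and the client-side phone check are illustrative, not part of the server's API.

```python
import json

def book_appointment_request(client_phone: str, **optional) -> dict:
    """Build a JSON-RPC 2.0 tools/call request for book_appointment.

    client_phone is the only required argument; first_name, last_name,
    broker_code, preferred_slot, etc. may be passed as keyword args.
    """
    if not client_phone.startswith("+"):
        # The schema asks for a country code, e.g. '+33612345678'.
        raise ValueError("client_phone must include a country code")
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "book_appointment",
            "arguments": {"client_phone": client_phone, **optional},
        },
    }

req = book_appointment_request(
    "+33612345678",
    first_name="Marie",
    last_name="Durand",
    preferred_slot="2026-03-26T10:00",
)
print(json.dumps(req, indent=2))
```

Collecting `first_name`, `last_name`, and the phone number before calling, as the description instructs, keeps the WhatsApp template and CRM alert populated on the first attempt.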
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructive/openWorld behavior, and the description adds valuable specifics: it discloses side effects (WhatsApp message sent, CRM notification triggered), return value type ('confirmation status'), and critical data handling policies ('Phone number is transmitted...but never stored').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences efficiently cover: purpose, client-side effect, broker-side effect, return value, and data privacy. Every sentence earns its place with no redundancy or filler; information is front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the destructive annotations and lack of output schema, the description adequately covers the return type (confirmation status), external system interactions (WhatsApp, CRM), and privacy implications. It could improve by mentioning error scenarios or retry behavior, but it is sufficiently complete for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all five parameters (format, examples, sources). The description adds a privacy note about phone handling but does not significantly expand on parameter semantics beyond what the schema provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Book') and clear resource ('appointment'), distinguishing it from sibling tools like get_quote or save_lead by specifying the mechanism ('via WhatsApp') and target ('insurance broker').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies the workflow (booking via WhatsApp), it lacks explicit guidance on when to use this versus save_lead or prerequisites like requiring find_broker first. The schema hints at dependencies, but the description text itself provides no explicit when/when-not guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bulk_quote (Read-only)
Generate quotes for multiple product types at once with the same client profile. Returns a consolidated matrix of all quotes across product families. Use this when the client needs coverage for multiple products (e.g. sante + prevoyance + gav).
| Name | Required | Description | Default |
|---|---|---|---|
| budget | No | Monthly budget in euros | |
| gender | No | Gender: M or F | |
| regime | No | Social security regime: general, tns, alsace_moselle | |
| children | No | Children details | |
| birth_date | Yes | Client birth date YYYY-MM-DD | |
| has_spouse | No | Whether client has a spouse to cover | |
| postal_code | Yes | French postal code | |
| coverage_zone | No | For sante_internationale | |
| product_types | Yes | Product types to quote (e.g. ['sante', 'gav', 'prevoyance']) | |
| departure_date | No | For sante_internationale | |
| insurance_regime | No | For sante_internationale | |
| spouse_birth_date | No | Spouse birth date YYYY-MM-DD | |
| number_of_children | No | Number of children | |
| destination_country | No | For sante_internationale |
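A client can validate the three required parameters before sending a `bulk_quote` call. The sketch below checks the `YYYY-MM-DD` date, the French postal code, and the `product_types` list against the families named in the table; the helper and the product set are assumptions drawn from the examples above, not a server-published enum.

```python
from datetime import date

# Illustrative product families, taken from the examples in the table
# above; the server may accept additional values.
KNOWN_PRODUCTS = {
    "sante", "gav", "prevoyance", "mrh", "rc_pro",
    "auto", "epargne", "sante_internationale",
}

def bulk_quote_arguments(birth_date: str, postal_code: str,
                         product_types: list, **optional) -> dict:
    """Fail fast on the formats the schema calls out, then assemble
    the arguments object for the tools/call request."""
    date.fromisoformat(birth_date)  # raises ValueError unless YYYY-MM-DD
    if not (len(postal_code) == 5 and postal_code.isdigit()):
        raise ValueError("postal_code must be a 5-digit French code")
    unknown = set(product_types) - KNOWN_PRODUCTS
    if unknown:
        raise ValueError(f"unrecognised product types: {sorted(unknown)}")
    return {"birth_date": birth_date, "postal_code": postal_code,
            "product_types": product_types, **optional}

args = bulk_quote_arguments(
    "1985-04-12", "75001", ["sante", "prevoyance", "gav"],
    has_spouse=False,
)
```

Rejecting malformed input locally avoids burning a round trip to the server for an error the schema already predicts.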
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds useful context about the consolidated matrix output format and multi-product scope, but doesn't mention rate limits, authentication needs, or error conditions. With annotations covering safety, a 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence states purpose and output, second provides usage guidance with examples. Perfectly front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with good annotations and full schema coverage, the description provides excellent purpose and usage context. The main gap is lack of output schema (returns 'consolidated matrix' but format unspecified), preventing a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema itself. The description doesn't add specific parameter details beyond implying 'product_types' accepts multiple values. Baseline 3 is correct when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Generate quotes') and resource ('multiple product types at once with the same client profile'), specifying it returns 'a consolidated matrix of all quotes across product families'. It distinguishes from siblings like 'get_quote' (singular) by emphasizing bulk/multi-product capability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this when the client needs coverage for multiple products' and provides concrete examples ('e.g. sante + prevoyance + gav'), giving clear when-to-use guidance. It implicitly contrasts with single-product quoting tools like 'get_quote'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_coverage (Read-only)
Verify if a specific insurance need is covered by this broker's available products. Returns coverage match score, identified gaps, and recommendations. Useful for comparing what a client needs vs what the broker can offer.
| Name | Required | Description | Default |
|---|---|---|---|
| need | Yes | Description of the insurance need (e.g. 'professional liability for architect firm with 3 employees') | |
| broker_code | No | Broker code returned by find_broker. Optional in broker-authenticated mode. | |
| current_coverage | No | Description of current coverage if any, to identify gaps |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only safety. The description adds valuable return value disclosure ('coverage match score, identified gaps, and recommendations') compensating for the missing output schema, though it omits rate limits or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently structured: first states purpose, second discloses return values. No redundant words; every clause provides necessary information not available in structured fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool with simple structure, the description is adequate. It compensates for the missing output schema by describing return content, and annotations cover safety profile. Could mention the find_broker workflow dependency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description references 'insurance need' and 'broker's products' aligning with parameters, but adds no additional semantic details, format constraints, or validation rules beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Verify') and resource ('insurance need is covered by a broker's products'), distinguishing it from siblings like get_quote (pricing) or find_broker (discovery).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (use when verifying coverage), but lacks explicit guidance on when to use versus alternatives like get_quote, or prerequisites like find_broker (though the schema mentions this).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
download_quote_pdf (Read-only)
Download the official quote PDF from the partner API (e.g., SPVIE). Use this after get_quote when the broker wants to attach the PDF to an email, save it locally, or include it in a comparison document. Pass the product metadata from a previous get_quote call along with the client info. Returns the PDF as base64.
| Name | Required | Description | Default |
|---|---|---|---|
| metadata | No | Additional provider-specific metadata | |
| provider | Yes | Provider name from get_quote results (the 'provider' field) | |
| commission | No | Commission percentage from get_quote metadata | |
| level_code | Yes | Level code from get_quote result | |
| level_name | Yes | Level name from get_quote result | |
| product_id | Yes | Product ID from get_quote result | |
| effect_date | No | Effective date YYYY-MM-DD | |
| client_email | No | Client email | |
| db_niveau_id | No | Database niveau ID from get_quote metadata | |
| product_name | Yes | Product name from get_quote result | |
| client_gender | No | Client gender: 'M' or 'F' | |
| client_street | No | Client street address | |
| db_product_id | No | Database product ID from get_quote metadata | |
| monthly_price | Yes | Monthly price from get_quote result | |
| client_last_name | Yes | Client's last name | |
| client_birth_date | Yes | Client birth date YYYY-MM-DD | |
| client_first_name | Yes | Client's first name | |
| client_postal_code | Yes | Client postal code | |
| partner_product_code | No | Partner product code from get_quote metadata |
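Because the tool returns the PDF as base64, the caller must decode it before attaching or saving the file. A minimal sketch, with a stand-in payload; the `save_quote_pdf` helper and the `%PDF` magic-byte check are this example's additions, not part of the tool contract.

```python
import base64
import os
import tempfile

def save_quote_pdf(result_base64: str, path: str) -> int:
    """Decode the base64 payload returned by download_quote_pdf and
    write it to disk; returns the number of bytes written."""
    pdf_bytes = base64.b64decode(result_base64, validate=True)
    if not pdf_bytes.startswith(b"%PDF"):
        raise ValueError("decoded payload does not look like a PDF")
    with open(path, "wb") as fh:
        fh.write(pdf_bytes)
    return len(pdf_bytes)

# Round-trip demo with a stand-in payload; a real response would
# carry a complete PDF document.
fake_pdf = b"%PDF-1.4 demo"
encoded = base64.b64encode(fake_pdf).decode()
out_path = os.path.join(tempfile.gettempdir(), "quote.pdf")
n = save_quote_pdf(encoded, out_path)
```

`validate=True` makes `b64decode` reject corrupted transport data instead of silently skipping invalid characters.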
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies the source ('partner API, e.g., SPVIE'), the return format ('PDF as base64'), and the dependency on previous data ('Pass the product metadata from a previous get_quote call'). Annotations cover safety (readOnly, non-destructive) and flexibility (openWorld), but the description provides practical implementation details that help the agent understand how to use it correctly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: the first states the purpose, the second provides usage guidelines, and the third covers parameters and return value. Every sentence adds value without redundancy, making it easy to parse and front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (19 parameters, no output schema), the description is complete enough. It explains the tool's purpose, when to use it, data sources, and return format. With annotations covering safety and flexibility, and the schema detailing all parameters, the description fills the gaps by clarifying the workflow dependency and use cases, making it sufficient for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all 19 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, only implying that parameters come from 'get_quote results' and 'client info.' This meets the baseline of 3 since the schema does the heavy lifting, but the description doesn't provide additional parameter insights like formatting examples or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Download the official quote PDF') and resource ('from the partner API'), distinguishing it from siblings like get_quote (which retrieves quote data) or save_document (which saves documents). It specifies the PDF is for use cases like email attachments or comparisons, making the purpose explicit and distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use this after get_quote') and for what purposes ('when the broker wants to attach the PDF to an email, save it locally, or include it in a comparison document'). It clearly positions this as a follow-up to get_quote, distinguishing it from other document-related tools like save_document.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_broker (Read-only)
Find a licensed insurance broker in France. Search by product type (mutuelle, RC Pro, MRH, auto, prévoyance, santé internationale, épargne), city, and language. Returns broker name, specialties, DDA compliance status, and connection endpoint.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | City or zone in France (e.g. 'paris', 'lyon', 'marseille') | |
| ticket | No | Project amount in EUR (for routing large-ticket cases) | |
| product | Yes | Insurance product type: mrh, rc_pro, sante, sante_internationale (international health/expat/WHV), prevoyance, auto, epargne | |
| language | No | Preferred language: fr, en | fr |
| sub_specialty | No | Sub-specialty niche within the product (e.g. 'architectes', 'sci_is', 'dirigeants') |
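The `product` and `language` parameters are closed enumerations, so a client can validate them against the table before calling. The sets and the lowercase-city normalization below are read off the table's examples and are assumptions, not a published contract.

```python
# Enumerated values copied from the find_broker parameter table.
PRODUCTS = {"mrh", "rc_pro", "sante", "sante_internationale",
            "prevoyance", "auto", "epargne"}
LANGUAGES = {"fr", "en"}

def find_broker_arguments(product: str, city: str = None,
                          language: str = "fr") -> dict:
    """Assemble validated arguments; language defaults to 'fr'
    as the table states."""
    if product not in PRODUCTS:
        raise ValueError(f"product must be one of {sorted(PRODUCTS)}")
    if language not in LANGUAGES:
        raise ValueError("language must be 'fr' or 'en'")
    args = {"product": product, "language": language}
    if city is not None:
        # The table's examples ('paris', 'lyon') are lowercase.
        args["city"] = city.lower()
    return args

args = find_broker_arguments("rc_pro", city="Paris")
```

The `broker_code` in the response then feeds `get_quote`, `get_broker_info`, and `book_appointment` downstream.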
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish readOnlyHint=true (safe search operation), but the description adds crucial behavioral context: it discloses return values (broker name, specialties, availability, connection endpoint) compensating for the missing output schema, and notes implementation details ('DDA-compliant broker matching with V2 specialty scoring') that help the agent understand the matching algorithm quality and compliance constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with four high-value sentences: (1) core purpose, (2) searchable dimensions with examples, (3) return payload specification, and (4) compliance/algorithm context. No redundancy; every sentence conveys distinct information necessary for tool selection and invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately compensates by enumerating return fields (name, specialties, availability, endpoint). With good annotations covering safety profiles and a well-documented schema, the description provides sufficient domain context (French market, licensing, DDA compliance) for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% description coverage, the description adds semantic value by mapping abstract parameter concepts to concrete examples—expanding 'MRH' to 'home insurance' and 'RC Pro' to 'professional liability,' and grouping parameters logically by search dimension (product type, location, language). This aids agent comprehension beyond the raw schema field names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Find[s] the most relevant licensed insurance broker in France for a given need,' specifying the action (find/match), resource (licensed insurance broker), geographic scope (France), and selection criteria (most relevant). It clearly distinguishes from sibling tools like book_appointment or get_broker_info by focusing on discovery/matching rather than scheduling or retrieving known entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool does (searches by product type, location, language) but does not explicitly state when to use this versus siblings like get_broker_info (which likely requires a broker ID) or save_lead. Usage is implied by the description of functionality rather than explicit guidance on alternatives or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_broker_info (Read-only)
Broker branding and identity. Returns: company name, logo URL, brand color (#hex), address, postal code, phone, ORIAS number, website, specialties, and DDA compliance status. ALWAYS call this before generating any document (PDF, PPTX, comparison, advisory note) to brand it with the broker's logo, color, name, address, and ORIAS number.
| Name | Required | Description | Default |
|---|---|---|---|
| broker_code | No | Broker code returned by find_broker. Optional in broker-authenticated mode. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish the read-only, non-destructive safety profile. The description adds valuable context about what data is returned (the five specific fields), which compensates partially for the missing output schema. However, it does not disclose error handling behavior (e.g., what happens if the broker_code is invalid), caching, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of exactly two sentences with zero waste. The first sentence front-loads the purpose and enumerates return fields; the second sentence provides the usage prerequisite. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a simple single-parameter lookup tool with good annotations, the description is appropriately complete. It compensates for the lack of an output schema by enumerating the specific data fields returned. It could be improved by noting error handling (e.g., 'returns error if broker not found'), but this is not critical for a read-only lookup of this simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description mentions 'broker_code' but adds no semantic detail beyond what the schema already provides (it repeats that it comes from find_broker, which the schema also states). The examples in the schema ('jmassure', 'protecsia') already provide sufficient context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Get') and resource ('insurance broker'), and explicitly distinguishes from the sibling 'find_broker' by stating this retrieves details for a 'specific' broker using a code from that tool. It lists exact fields returned (address, specialties, opening hours, languages, DDA compliance), eliminating ambiguity about what 'detailed information' means.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states the prerequisite ('Use the broker_code returned by find_broker'), establishing a clear workflow and distinguishing this lookup tool from the search-oriented sibling. However, it lacks an explicit 'when not to use' statement (e.g., 'Do not use for searching by location').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_client_360 (Read-only)
Get the complete 360-degree view of a client: identity, active projects, quotes, recent calls, recent emails, documents, and consent status. Returns everything a broker needs to prepare for a client interaction.
| Name | Required | Description | Default |
|---|---|---|---|
| client_phone | Yes | Client phone number with country code |
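Since `client_phone` is the lookup key, inconsistent formatting (spaces, missing country code) would miss the client record. A best-effort normalization sketch, assuming a leading 0 on a French national number maps to +33; production code should use a dedicated phone-number library instead of this heuristic.

```python
import re

def normalize_client_phone(raw: str, default_country: str = "+33") -> str:
    """Normalize to the '+<country><number>' shape the schema expects.

    Assumption: a number starting with 0 is a French national number,
    so the 0 is replaced by the default country code.
    """
    digits = re.sub(r"[^\d+]", "", raw)  # strip spaces, dots, dashes
    if digits.startswith("+"):
        return digits
    if digits.startswith("0"):
        return default_country + digits[1:]
    return "+" + digits

phone = normalize_client_phone("06 12 34 56 78")
```

Normalizing before every call keeps `get_client_360`, `save_lead`, and `book_appointment` keyed to the same client record.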
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and data scope. The description adds value by specifying the comprehensive nature of the returned data (identity, projects, quotes, etc.) and the use case for broker preparation, but doesn't disclose additional behavioral traits like rate limits, auth needs, or pagination.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and adds practical context in the second. Both sentences earn their place by clarifying scope and use case without redundancy or unnecessary details, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregating multiple data types) and lack of output schema, the description does well by listing the returned data components and the use case. However, it could be more complete by hinting at the output structure or limitations, though annotations cover key behavioral aspects, keeping it from a score of 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'client_phone', with the schema providing a clear description. The description doesn't add any parameter-specific details beyond implying the phone number is used to identify the client, so it meets the baseline of 3 where the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'complete 360-degree view of a client', specifying identity, active projects, quotes, recent calls, recent emails, documents, and consent status. It distinguishes from siblings like get_broker_info or get_quote by emphasizing a comprehensive client overview rather than specific data points.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'Returns everything a broker needs to prepare for a client interaction', indicating it's for pre-interaction preparation. However, it doesn't explicitly state when not to use it or name alternatives like get_broker_info for broker-specific details, which would be needed for a score of 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_hot_leads (Read-only)
Get the prioritized list of leads that need attention: pending callbacks, untouched new leads, stale quotes without follow-up. Each lead includes a reason explaining why it's hot. Use this at the start of the day to know who to call first.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Look-back period in days for stale detection. Default: 7 | |
| limit | No | Maximum number of leads to return. Default: 20 | |
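Both parameters are optional with server-side defaults, so a minimal call can send an empty arguments object. A sketch of building the arguments client-side (the helper function is hypothetical; only the parameter names and defaults come from the table above):

```python
def hot_leads_args(days: int = 7, limit: int = 20) -> dict:
    """Build the arguments for a get_hot_leads call, mirroring the
    documented defaults (7-day look-back, top 20 leads)."""
    if days < 1 or limit < 1:
        raise ValueError("days and limit must be positive integers")
    return {"days": days, "limit": limit}

# Start-of-day triage, as the description suggests:
morning_call = {"name": "get_hot_leads", "arguments": hot_leads_args()}
```

Passing the defaults explicitly costs nothing and makes the look-back window visible in call logs.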
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, indicating a safe, bounded read operation. The description adds valuable context beyond this: it explains the prioritization logic (based on callbacks, new leads, stale quotes) and the inclusion of reasons for each lead, which are not covered by annotations. No contradictions with annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a usage guideline. Both sentences earn their place by providing essential information without redundancy. It is appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema), the description is mostly complete: it clarifies purpose, usage context, and behavioral details. However, it lacks information on output format (e.g., structure of the lead list) and error handling, which could be useful since there is no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for 'days' (look-back period for stale detection) and 'limit' (maximum leads to return). The description does not add meaning beyond the schema, as it mentions no parameters. Baseline score of 3 is appropriate since the schema adequately documents parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'prioritized list of leads', specifying what qualifies as 'hot' (pending callbacks, untouched new leads, stale quotes without follow-up) and that each lead includes a reason. This distinguishes it from siblings like 'get_client_360' or 'get_quote' by focusing on urgency and attention needs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit context for when to use it ('at the start of the day to know who to call first'), which helps prioritize usage. However, it does not mention when not to use it or name specific alternatives among siblings, such as 'get_client_360' for detailed lead info or 'log_interaction' for follow-up actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_product_details (A, read-only)
Get detailed guarantee information for a specific insurance product. Returns coverage tables (hospitalisation, dental, optical, hearing aids, etc.) with reimbursement rates and limits per formula level. Use this after get_quote to show the client exactly what a product covers. The client can then compare two products by calling this tool twice.
| Name | Required | Description | Default |
|---|---|---|---|
| level_name | No | Specific coverage level/formula name (e.g. 'EN3', 'Confort - Formule B', 'AS2'). If omitted, returns the full product sheet with all levels. | |
| broker_code | No | Broker code returned by find_broker. Optional in broker-authenticated mode. | |
| product_name | Yes | Product name as returned by get_products or get_quote | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and openWorldHint=true. The description adds valuable behavioral context beyond these: it discloses what data structure is returned (coverage tables with reimbursement rates and limits per formula level) and explains the conditional behavior when level_name is omitted (returns full product sheet with all levels).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly constructed sentences with zero waste: sentence 1 states purpose, sentence 2 details return values, sentence 3 provides temporal usage context, and sentence 4 describes a specific usage pattern. Information is front-loaded with the core action immediately stated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a read-only lookup tool with clear annotations and no output schema, the description adequately compensates by detailing the return content (coverage tables, specific guarantee types). It integrates well with the sibling tool ecosystem (referencing get_quote and find_broker implicitly). Minor gap: could clarify error handling or 'not found' behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description adds implicit workflow context referencing 'formula level' (mapping to level_name) and mentioning the get_quote predecessor step, but does not add explicit parameter syntax, validation rules, or semantics beyond what the well-documented schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with clear resource 'detailed guarantee information for a specific insurance product' and distinguishes from siblings by contrasting with get_quote (used 'after get_quote') and implying difference from get_products (detailed vs list view). It also specifies concrete content types like 'hospitalisation, dental, optical' coverage tables.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow sequencing ('Use this after get_quote') and usage pattern ('call this tool twice' for comparison). However, it lacks explicit 'when not to use' guidance or direct comparison to alternatives like check_coverage or get_products that might overlap in functionality.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_products (A, read-only)
Insurance product catalogue: list insurance products with eligibility criteria, coverage details, and indicative pricing. Filter by category: mrh (home), rc_pro (professional liability), sante (health), sante_internationale (international health for expatriates), prevoyance (life), epargne (savings), auto.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Product category filter: mrh, rc_pro, sante, sante_internationale, prevoyance, auto, epargne | |
| broker_code | No | Broker code returned by find_broker. Optional in broker-authenticated mode. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true (safe read). The description adds valuable behavioral context about the payload content—specifying that results include 'eligibility criteria, coverage details, and indicative pricing ranges'—which compensates for the missing output schema and helps the agent assess if this tool satisfies data requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of exactly two efficient sentences: the first front-loaded with the action and return value, the second providing the prerequisite. Every word earns its place with zero redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately describes the conceptual structure of returned data (criteria, coverage, pricing). It covers the prerequisite workflow and safety profile (via annotations). A minor gap remains regarding pagination behavior or result limits, but it is sufficient for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both 'broker_code' (with examples) and 'category' (with enum values). The description reinforces the source of broker_code ('returned by find_broker') but does not add significant semantic meaning beyond the comprehensive schema, meriting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'List[s] insurance products available from a broker' with specific details about returned data (eligibility, coverage, pricing). However, it does not explicitly distinguish this listing tool from the sibling 'get_product_details' (which implies a single-item deep dive vs. this multi-item overview).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit workflow guidance: 'Use the broker_code returned by find_broker,' establishing a clear prerequisite chain with a named sibling tool. It lacks explicit 'when not to use' guidance (e.g., distinguishing from get_product_details for deep dives), but the prerequisite instruction is highly actionable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_quote (A, read-only)
Real-time insurance pricing: generate real insurance quotes from partner APIs. Quotes for health (mutuelle), protection (prévoyance), RC Pro, MRH, auto, and international health. Returns monthly prices per product and level. No PII is stored or returned.
IMPORTANT: required fields depend on product_type.
- French products (sante, sante_tns, mrh, rc_pro, prevoyance, auto, gav, pj, emprunteur, etc.): birth_date + postal_code are REQUIRED. Use regime (general/tns/alsace_moselle) for health products.
- sante_internationale (expat/WHV/nomad international health): birth_date + destination_country are REQUIRED; postal_code is NOT needed. Use insurance_regime (premier_euro/complement_cfe/complement_secu/etudiant) instead of regime. Also collect coverage_zone and departure_date.
Use product_data for product-specific information. Call get_products first to see quoteRequirements.guidance for each product — it tells you exactly what to ask the client.
product_data examples by product_type:
- auto: {marque, modele, annee, immatriculation, energie, puissance_fiscale, km_annuel, usage, stationnement, date_permis, bonus_malus, sinistres_3ans, formule: tiers/tiers_etendu/tous_risques}
- mrh: {type_logement: appartement/maison, statut_occupant: proprietaire/locataire/pno, surface, nb_pieces, etage, annee_construction, alarme, valeur_mobilier}
- emprunteur: {montant_pret, duree_pret_mois, taux_pret, type_pret: residence_principale/secondaire/investissement, fumeur, quotite_pct}
- rc_pro: {activite, code_naf, nb_salaries, ca_annuel}
- gav: {formule: individuelle/famille, seuil_intervention_pct: 5/10/15/30}
- per: {revenus_annuels, tmi, versement_initial, versement_mensuel, profil_risque: prudent/equilibre/dynamique}
- assurance_vie: {versement_initial, versement_mensuel, profil_risque, horizon_placement_annees}
| Name | Required | Description | Default |
|---|---|---|---|
| budget | No | Client's monthly budget in euros (e.g. 80). Results sorted by proximity to budget. | |
| gender | No | Gender: M or F | |
| regime | No | French social security regime: general, tns, alsace_moselle. For sante/sante_tns only. | |
| children | No | Children details for family coverage | |
| show_all | No | Return ALL quotes instead of top 5. Use only when client asks for more options. | |
| birth_date | Yes | Client birth date in YYYY-MM-DD format (e.g. '1988-05-15') | |
| has_spouse | No | Whether the client has a spouse/partner to cover | |
| broker_code | No | Broker code returned by find_broker. Optional in broker-authenticated mode. | |
| nationality | No | Client nationality for sante_internationale (e.g. 'France'). Defaults to France. | |
| postal_code | No | French postal code (e.g. '75008', '92150'). Required for French products, not needed for sante_internationale. | |
| product_data | No | Product-specific data object. Fields depend on product_type — see description above for examples per product. Call get_products first to see quoteRequirements.guidance for the exact fields to collect. | |
| product_type | Yes | Product type: sante, sante_tns, sante_internationale, auto, mrh, emprunteur, rc_pro, gav, prevoyance, pj, ij, per, assurance_vie, scolaire, epargne | |
| coverage_zone | No | Coverage zone for sante_internationale: monde_usa, monde_hors_usa, europe, asie_oceanie, ameriques. | |
| departure_date | No | Coverage start date YYYY-MM-DD for sante_internationale. Defaults to 30 days from now. | |
| insurance_regime | No | Insurance regime for sante_internationale: premier_euro, complement_cfe, complement_secu, etudiant. Do NOT confuse with 'regime' (French social security). | |
| spouse_birth_date | No | Spouse birth date in YYYY-MM-DD format | |
| number_of_children | No | Number of children to cover | |
| destination_country | No | Destination country (e.g. 'Canada', 'Australie', 'Thailand'). REQUIRED for sante_internationale. | |
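The IMPORTANT block in the description encodes rules the flat schema cannot express: which fields become mandatory for which product_type. A pre-flight check capturing those rules might look like this (a sketch of the documented rules only; the server remains the source of truth):

```python
def missing_quote_fields(args: dict) -> list[str]:
    """Return human-readable problems with a get_quote argument object,
    following the product_type-dependent rules in the tool description."""
    problems = []
    if "birth_date" not in args:
        problems.append("birth_date is required for every product_type")
    if args.get("product_type") == "sante_internationale":
        if "destination_country" not in args:
            problems.append("destination_country is required for sante_internationale")
        if "regime" in args:
            problems.append("use insurance_regime (not regime) for sante_internationale")
    else:
        if "postal_code" not in args:
            problems.append("postal_code is required for French products")
    return problems

# A complete French health-insurance request passes the check:
assert missing_quote_fields({"product_type": "sante",
                             "birth_date": "1988-05-15",
                             "postal_code": "75008"}) == []
```

Running such a check before the call avoids a failed round trip when the agent forgot to collect a product-type-specific field from the client.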
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable context beyond annotations: discloses external API dependencies (SPVIE, Néoliane, Alptis), explains the return format ('monthly prices per product and coverage level'), and notes user interaction requirements. Despite the 'Generate' wording, it aligns with readOnlyHint=true, as it describes a calculation/retrieval operation, not a state mutation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, all essential: (1) core purpose, (2) critical requirements, (3) workflow prerequisite, (4) return value. Front-loaded with a specific action, no redundancy, and appropriate density for an 18-parameter financial tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent coverage given no output schema exists: describes return values ('monthly prices per product'), explains external dependencies, and documents user interaction requirements. Sufficient for an AI agent to invoke correctly despite missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds meaning by emphasizing which parameters are critical for 'accurate pricing' and providing workflow context ('Ask the user...'), explaining the business logic behind requiring birth_date and postal_code beyond just schema validation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Generate' with clear resource 'real insurance quotes with actual pricing'. Distinguishes from siblings by specifying partner APIs (SPVIE, Néoliane, Alptis) and actual pricing vs other tools like get_products or find_broker.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear prerequisites: 'Requires birth_date... and postal_code' and explicit workflow instruction 'Ask the user for their birth date and postal code before calling'. Lacks explicit 'when not to use' or sibling alternatives, but context is strong enough to infer appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
log_interaction (A)
Log a client interaction in the CRM: call, email, WhatsApp, meeting, or note. Records the summary, updates last interaction date, optionally traces RGPD consent, and creates a follow-up reminder if needed. Use this after every client exchange to maintain DDA compliance and CRM accuracy.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Interaction type: call, email, whatsapp, meeting, note | |
| summary | Yes | Summary of the interaction | |
| next_action | No | What needs to happen next | |
| client_phone | Yes | Client phone number with country code | |
| consent_given | No | Set to true if the client gave RGPD consent | |
| duration_minutes | No | Duration in minutes (for calls) | |
| next_action_date | No | When the next action should happen (YYYY-MM-DD) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-readOnly, non-destructive, closed-world tool. The description adds valuable behavioral context beyond annotations: it explains that logging updates the last interaction date, optionally traces RGPD consent, and can create follow-up reminders. This clarifies side effects and compliance aspects not covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first explains what the tool does, and the second provides usage guidelines. Every phrase adds value (e.g., compliance mention, reminder creation), with no redundant or vague language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool (no readOnlyHint) with no output schema, the description does well by explaining key behavioral outcomes (updates dates, traces consent, creates reminders). It could be more complete by mentioning error conditions or response format, but it covers the essential context given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description mentions 'summary' and 'RGPD consent' (mapping to 'consent_given'), but doesn't add significant semantic value beyond what the schema already provides. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Log') and resource ('client interaction in the CRM'), specifying the interaction types (call, email, WhatsApp, meeting, or note). It distinguishes this tool from siblings like 'save_lead' or 'save_document' by focusing on logging interactions rather than creating/updating other entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Use this after every client exchange to maintain DDA compliance and CRM accuracy.' It provides clear context (post-interaction logging) and mentions compliance requirements, though it doesn't name specific alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_custom_quote (A)
Save a manually-created product offer to a prospect's comparison list in the CRM. Use this when the broker builds a custom offer in Claude (for a product not yet integrated via API, or with specific negotiated pricing). The custom quote appears next to the API-generated quotes in the prospect's file. The prospect must already exist — use save_lead first if needed.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | Free-text notes visible in the CRM | |
| coverage | No | List of coverage items | |
| level_name | Yes | Level or formula name | |
| description | No | Short description of the offer | |
| client_phone | Yes | Client phone number with country code to identify the prospect | |
| product_name | Yes | Product name | |
| monthly_price | Yes | Monthly price in euros | |
| provider_name | Yes | Provider/company name | |
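Five of the eight parameters are required, so a completeness check before the write avoids a wasted mutation call. A sketch (the required-field set is taken from the table; the helper and the example values are illustrative):

```python
# Required save_custom_quote fields, per the parameter table.
REQUIRED_FIELDS = {"client_phone", "product_name", "level_name",
                   "monthly_price", "provider_name"}

def custom_quote_args(**fields) -> dict:
    """Validate that every required save_custom_quote field is present."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return fields

offer = custom_quote_args(client_phone="+33612345678",
                          product_name="Example product",    # illustrative
                          level_name="Formule B",            # illustrative
                          monthly_price=72.50,
                          provider_name="Example provider",  # illustrative
                          notes="negotiated pricing")        # optional extras pass through
```

Remember the prerequisite in the description: the prospect must already exist, so save_lead comes first for a new client.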
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a write operation (readOnlyHint: false) with open-world data (openWorldHint: true) and non-destructive (destructiveHint: false). The description adds valuable context beyond annotations: it explains that the custom quote appears alongside API-generated quotes in the prospect's file, which helps the agent understand the tool's integration behavior. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: the first states the purpose, the second provides usage guidelines, and the third gives a prerequisite. Every sentence adds essential information with zero wasted words, and key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, write operation) and lack of output schema, the description does well by explaining the tool's purpose, usage context, and prerequisites. However, it doesn't describe the return value or error conditions, which would be helpful for a mutation tool without output schema. Annotations cover the safety profile adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented in the schema. The description doesn't add specific parameter semantics beyond what's in the schema (e.g., it doesn't explain format details for client_phone or constraints for monthly_price). However, it implicitly contextualizes parameters by mentioning they're part of a 'manually-created product offer,' which provides some high-level meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Save a manually-created product offer') and resource ('to a prospect's comparison list in the CRM'), distinguishing it from siblings like save_lead (which creates prospects) or get_quote (which retrieves API-generated quotes). It explicitly mentions this is for custom offers not integrated via API.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when the broker builds a custom offer in Claude for a product not yet integrated via API, or with specific negotiated pricing') and includes a prerequisite ('The prospect must already exist — use save_lead first if needed'), clearly differentiating it from alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_document (A)
Save a file (PDF, PPTX, DOCX, etc.) to a client's record in the broker's CRM. Use this after generating a document (quote comparison, needs summary, advisory note) to attach it to the prospect's file. The client must already exist as a lead (use save_lead first). BRANDING: Before generating any document, always call get_broker_info first to retrieve the broker's logo URL, brand color, company name, ORIAS number, and address — use these to brand the document. The file content must be base64-encoded.
| Name | Required | Description | Default |
|---|---|---|---|
| label | No | Description of the document (e.g. 'Comparatif santé internationale Espagne') | |
| file_name | Yes | File name with extension (e.g. 'comparatif-sante.pdf', 'presentation.pptx') | |
| broker_code | No | Broker code returned by find_broker. Optional in broker-authenticated mode. | |
| client_phone | Yes | Client phone number with country code to identify the prospect | |
| document_type | No | Document type: devis, fiche_conseil, comparatif, autre. Defaults to 'autre'. | |
| content_base64 | Yes | File content encoded in base64 | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate write operation (readOnlyHint: false) and non-destructive nature (destructiveHint: false). Description adds critical encoding requirement ('must be base64-encoded') and CRM attachment context not evident in annotations. Does not describe return values or partial failure behavior, hence not a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: (1) core purpose, (2) workflow timing, (3) prerequisite/dependency, (4) technical constraint. Front-loaded with the essential action and resource. No redundant phrases or repetition of schema details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent coverage of prerequisites, encoding constraints, and sibling dependencies for a 6-parameter file upload tool. Annotations cover safety profile. Lacks description of success response (since no output schema exists), but workflow context is sufficiently complete for agent selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds value by contextualizing 'content_base64' with the encoding constraint imperative, and connects 'broker_code'/'client_phone' to the CRM workflow. Also provides concrete file type examples (PDF, PPTX) reinforcing 'file_name' parameter intent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Save' with resource 'file' and destination 'client's record in the broker's CRM'. Examples (PDF, PPTX, DOCX) clarify scope. Clearly distinguishes from sibling 'save_lead' (creates leads vs attaches documents) and 'send_document' (transmission vs storage).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('after generating a document') and names specific document types (quote comparison, needs summary). Critically, provides prerequisite instruction ('The client must already exist as a lead') and explicitly names sibling tool 'save_lead' as the required predecessor step.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_lead (A)
Save a client's contact information and insurance needs as a lead in the broker's CRM. Use this when a client has received quotes and wants to go further — the broker will contact them to finalize the subscription. IMPORTANT: Use the same broker_code from your previous find_broker/get_quote/get_products call. Always ask the client for their phone number, first name, and last name before calling this tool. Include the quote_ref from a previous get_quote call and any relevant context about their needs.
| Name | Required | Description | Default |
|---|---|---|---|
| email | No | Client's email address | |
| gender | No | Client's gender: 'M' or 'F' | |
| context | No | Brief summary of client's needs, budget, preferences (e.g. 'Health insurance, budget 80€/month, needs good dental coverage') | |
| last_name | Yes | Client's last name | |
| quote_ref | No | Reference from a previous get_quote call (e.g. 'MCP-20260326-A1B2C3D4') | |
| birth_date | No | Client's date of birth in YYYY-MM-DD format | |
| first_name | Yes | Client's first name | |
| broker_code | No | Broker code returned by find_broker. Optional in broker-authenticated mode. | |
| postal_code | No | Client's postal code (e.g. '75001') | |
| client_phone | Yes | Client phone number with country code (e.g. '+33612345678') | |
| product_type | No | Insurance product type: sante, sante_tns, rc_pro, mrh, prevoyance, gav, pj | |
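As a concrete illustration of the parameter table above, here is a minimal sketch of a save_lead payload together with a client-side sanity check. The field names come from the schema, but every value and the `check_save_lead_args` helper are illustrative assumptions, not part of the server's API.

```python
import re

# Illustrative save_lead arguments, following the parameter table above.
# All values are hypothetical examples, not real client data.
save_lead_args = {
    "broker_code": "BRK-001",               # assumed format; returned by find_broker
    "first_name": "Marie",                  # required
    "last_name": "Dupont",                  # required
    "client_phone": "+33612345678",         # required, with country code
    "email": "marie.dupont@example.com",
    "gender": "F",
    "birth_date": "1990-04-12",             # YYYY-MM-DD
    "postal_code": "75001",
    "product_type": "sante",
    "quote_ref": "MCP-20260326-A1B2C3D4",   # from a previous get_quote call
    "context": "Health insurance, budget 80€/month, needs good dental coverage",
}

def check_save_lead_args(args: dict) -> list:
    """Return a list of validation problems (empty if the payload looks valid)."""
    problems = []
    # The three required fields per the table above.
    for field in ("first_name", "last_name", "client_phone"):
        if not args.get(field):
            problems.append("missing required field: " + field)
    # Phone must include a country code, e.g. '+33612345678'.
    phone = args.get("client_phone", "")
    if phone and not re.fullmatch(r"\+\d{6,15}", phone):
        problems.append("client_phone must include a country code")
    # Date of birth, if given, must be YYYY-MM-DD.
    birth = args.get("birth_date")
    if birth and not re.fullmatch(r"\d{4}-\d{2}-\d{2}", birth):
        problems.append("birth_date must be YYYY-MM-DD")
    return problems
```

A pre-flight check like this catches the most common agent mistakes (missing required fields, phone numbers without a country code) before the tool call is made.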
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-destructive write operation affecting external systems (openWorld: true). The description adds valuable behavioral context not in annotations: that the broker will actively contact the client to finalize subscription, and that user interaction (asking for phone/name) is required pre-invocation. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with zero redundancy: purpose statement, workflow trigger/outcome, and prerequisite warning (IMPORTANT). Information is front-loaded and each sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 11-parameter workflow tool with no output schema, the description successfully explains the multi-step context (find broker → get quote → save lead → broker contacts). It covers prerequisites and side effects. Minor gap: doesn't describe the return value or error states, though annotations indicate the operation is non-destructive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds significant value by explaining data provenance: quote_ref comes from 'previous get_quote call', and the three contact fields must be explicitly asked of the client. This guidance on how to acquire required parameters goes beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Save... as a lead'), the resource ('broker's CRM'), and the content ('contact information and insurance needs'). It effectively distinguishes this from siblings by positioning it as the post-quote conversion step (vs. get_quote for quotes, book_appointment for scheduling, find_broker for discovery).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use ('when a client has received quotes and wants to go further') and clear workflow prerequisites ('Include the quote_ref from a previous get_quote call'). However, it lacks explicit 'when-not-to-use' guidance or named alternatives (e.g., when to use book_appointment instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
send_document
Send a document to a client via WhatsApp. Use this after generating a quote comparison, advisory note, or any document the client should receive. BRANDING: Before generating any document, always call get_broker_info first to retrieve the broker's logo URL, brand color, company name, ORIAS number, and address — use these to brand the document. The file content must be base64-encoded. The document is uploaded, then sent via WhatsApp with a caption message. The document is also saved to the client's CRM record automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| label | No | Description for the CRM record | |
| caption | No | Message to accompany the document on WhatsApp (e.g. 'Voici votre comparatif santé internationale') | |
| file_name | Yes | File name with extension (e.g. 'comparatif-sante.pdf') | |
| broker_code | No | Broker code returned by find_broker. Optional in broker-authenticated mode. | |
| client_phone | Yes | Client phone number with country code (WhatsApp recipient) | |
| document_type | No | Document type: devis, fiche_conseil, comparatif, autre. Defaults to 'autre'. | |
| content_base64 | Yes | File content encoded in base64 | |
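Since content_base64 must carry the raw file bytes as base64, a calling agent might prepare the payload along these lines. This is a sketch: the `encode_document` helper and all values are illustrative, and only the parameter names are taken from the table above.

```python
import base64

def encode_document(file_bytes: bytes) -> str:
    """Encode raw file bytes into the base64 string expected by content_base64."""
    return base64.b64encode(file_bytes).decode("ascii")

# Hypothetical document bytes; in practice this would be a generated PDF.
pdf_bytes = b"%PDF-1.4 placeholder content"

# Illustrative send_document arguments (values are hypothetical).
send_document_args = {
    "client_phone": "+33612345678",
    "file_name": "comparatif-sante.pdf",
    "content_base64": encode_document(pdf_bytes),
    "caption": "Voici votre comparatif santé internationale",
    "document_type": "comparatif",   # devis, fiche_conseil, comparatif, autre
    "label": "Quote comparison sent via WhatsApp",
}
```

Decoding `content_base64` with `base64.b64decode` should recover the original file bytes exactly; if it does not, the payload was mis-encoded.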
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare destructive=true and openWorld=true; the description adds valuable multi-step behavioral context: documents are uploaded, sent via WhatsApp with captions, and automatically saved to the CRM. It discloses the base64 encoding requirement and external system interactions (WhatsApp, CRM) that annotations don't specify. It could explicitly mention the irreversible nature of WhatsApp messaging.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: purpose declaration, usage timing, technical requirement, and process disclosure. Information is front-loaded and each sentence earns its place. No redundant repetition of schema details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 7-parameter tool with dual external integrations (WhatsApp + CRM) and destructive annotations, the description adequately covers the workflow and side effects (automatic CRM saving). No output schema exists; while it doesn't describe return values, it explains the behavioral outcomes sufficiently for an agent to understand success states.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds crucial encoding requirements ('must be base64-encoded') and maps abstract parameters to business concepts (connecting 'caption' to WhatsApp messages and document types to quote comparisons/advisory notes). It provides exemplar values that complement the schema's allowed document types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Send a document to a client via WhatsApp': specific verb, resource, and delivery method. It clearly distinguishes from sibling save_document (which lacks the WhatsApp sending capability) and get_quote (which generates but doesn't deliver documents), while explaining the dual action of WhatsApp delivery plus CRM archiving.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit temporal guidance: 'Use this after generating a quote comparison, advisory note, or any document the client should receive.' This establishes clear workflow placement. Lacks explicit 'when not to use' or direct comparison to save_document, but the 'after generating' context effectively signals this is for distribution, not storage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
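To catch mistakes before publishing, a lightweight local check of the manifest structure shown above could look like this sketch. The `validate_glama_manifest` helper is hypothetical and checks only the fields shown in the example; the authoritative definition is the schema referenced by "$schema".

```python
import json

def validate_glama_manifest(raw: str) -> list:
    """Sanity-check a /.well-known/glama.json payload; return a list of problems."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return ["invalid JSON: " + str(exc)]
    problems = []
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append('"maintainers" must be a non-empty list')
    else:
        for m in maintainers:
            email = m.get("email") if isinstance(m, dict) else None
            if not email or "@" not in email:
                problems.append("each maintainer needs an email matching your Glama account")
    return problems

# Example manifest matching the structure shown above.
manifest = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
```

Running this before deploying the file avoids waiting on Glama's automatic verification only to discover a malformed manifest.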
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!