DealerMax
Server Details
Italian cross-dealer search: used cars, long-term rentals, dealer directory, knowledge.
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 6 of 6 tools scored.
Each tool has a distinct purpose: dealer directory, market knowledge base, vehicle details, technical specs, NLT offers, and used car search. No overlap in functionality.
Tools use a mix of find_, get_, and search_ prefixes, but all follow snake_case. The pattern is understandable, though not perfectly uniform.
Six tools is an appropriate count for a dealer network and vehicle information platform. The surface is well-scoped, neither too heavy nor too thin.
The tool surface covers core functions: dealer lookup, vehicle details, specs, used car search, rental offers, and market intel. No obvious gaps for a read-only informational service.
Available Tools
6 tools
find_dealer
Directory of active dealers in the DealerMax network, with search filters.
Args:
    region: Filter by Italian region name ("Lombardia", "Sicilia"), province
        abbreviation ("MI", "PA"), full province name ("Milano", "Palermo"),
        or city ("Cusago", "Buccinasco"). Case-insensitive,
        accent-insensitive. An internal map resolves the 110 Italian
        provinces to the 20 ISTAT administrative regions.
    brand: Filter dealers that sell this car brand (case-insensitive).
    services: List of dealer services (NOT yet supported; the field is not normalized in the DB).

| Name | Required | Description | Default |
|---|---|---|---|
| brand | No | | |
| region | No | | |
| services | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
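The description says region matching is case-insensitive and accent-insensitive. A minimal sketch of what that normalization could look like client-side; the helper names are illustrative and not part of the actual server.

```python
# Hypothetical sketch of case-insensitive, accent-insensitive name matching,
# as the find_dealer description implies. Not the server's actual code.
import unicodedata


def normalize(text: str) -> str:
    """Lowercase and strip accents, e.g. 'Forlì' -> 'forli'."""
    decomposed = unicodedata.normalize("NFKD", text)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return stripped.casefold()


def region_matches(query: str, candidates: list[str]) -> bool:
    """True if the query matches any candidate name after normalization."""
    return normalize(query) in {normalize(c) for c in candidates}


# 'forli' matches 'Forlì'; 'PALERMO' matches 'Palermo'
assert region_matches("forli", ["Forlì", "Milano"])
assert region_matches("PALERMO", ["Palermo", "Catania"])
```

Under this reading, an agent can pass whatever casing or accents it has on hand ("MILANO", "forlì") without pre-normalizing.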
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description provides useful behavioral details: region filtering is case-insensitive, accent-insensitive, and maps provinces to regions via an internal map; the services filter is explicitly marked as not yet supported. This goes beyond a simple statement of what the tool does.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and includes a clear 'Args:' section. However, the structure could be improved by front-loading the core purpose more prominently. The current format is adequate but not optimally streamlined for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has three optional parameters, an output schema (not shown), and no annotations, the description covers the parameters well and notes a limitation. It does not discuss response shape or potential pagination, but the presence of an output schema reduces the need. It is nearly complete for this simple search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description fully compensates. It explains the region parameter's accepted formats (region name, province abbreviation, full name, city) and the mapping logic. For brand, it notes case-insensitivity. For services, it warns that it is unsupported. This adds significant value beyond the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Directory dealer attivi nel network DealerMax con filtri di ricerca' (Directory of active dealers in the DealerMax network with search filters). It specifies the parameters (region, brand, services) and differentiates from sibling tools like get_market_intel, get_vehicle_details, etc., which focus on other aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks guidance on when to use this tool versus alternatives. It does not mention use cases, prerequisites, or when not to use it. The only hint is that the services filter is not yet supported, but no comparative advice is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_intel
Platform-wide Italian automotive knowledge base: guides, glossary, FAQ, news.
Args:
    query: Semantic query in Italian (e.g. "incentivi auto elettriche 2026").
    types: Subset of ["guide", "glossary", "faq", "news"]. Default: all.
    limit: Maximum number of total results (1-30, default 5).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| types | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
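The documented defaults (all types, limit 5, range 1-30) can be applied client-side before the call. A sketch under those assumptions; the builder function is illustrative, not part of the server.

```python
# Hypothetical client-side argument builder for get_market_intel,
# enforcing the documented defaults and the 1-30 limit range.
ALL_TYPES = ["guide", "glossary", "faq", "news"]


def build_market_intel_args(query: str, types=None, limit: int = 5) -> dict:
    if types is None:
        types = list(ALL_TYPES)  # documented default: all content types
    assert set(types) <= set(ALL_TYPES), "types must be a subset"
    limit = max(1, min(limit, 30))  # documented range 1-30
    return {"query": query, "types": types, "limit": limit}


args = build_market_intel_args("incentivi auto elettriche 2026", limit=50)
assert args["limit"] == 30  # out-of-range limit clamped to the documented max
```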
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits. It only describes input parameters and does not mention if the tool is read-only, required permissions, rate limits, or behavior on empty results.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is structured with a one-line purpose then parameter details. It is concise but the initial sentence could be more natural. No wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description adequately covers purpose and parameters. It lacks mention of pagination or read-only nature, but overall is sufficient for a tool with low complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All three parameters are clearly described with examples and defaults, adding significant value over the schema which only provides titles and defaults. Schema description coverage is 0%, so the description fully compensates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it covers a knowledge base for automotive Italian platform including guide, glossary, FAQ, and news. It distinguishes from siblings which are for dealers, vehicle details, and offers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool versus alternatives like find_dealer or get_vehicle_details. Usage is implied by the tool's purpose and the semantic query expectation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vehicle_details
Full details for a single vehicle (specifications, price, images, dealer, podcast).
Args:
    vehicle_slug: Vehicle ID. Accepts a bare UUID (id_auto) or a friendly
        slug such as "marca-modello-id_auto" (the trailing UUID segment is extracted).

| Name | Required | Description | Default |
|---|---|---|---|
| vehicle_slug | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool returns comprehensive vehicle details including dealer and podcast. This is good for a read operation, though it does not mention authentication or rate limits, which are minor omissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: one for purpose and one for parameter details. It is front-loaded and efficient, with no extraneous information. Every sentence serves a clear function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has an output schema (implied), the description omits no critical information about return values. It covers the single parameter adequately, though it could mention error cases or the need for a valid vehicle identifier.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema provides only a string type with no description for vehicle_slug (0% coverage). The description compensates by explaining that the parameter accepts a pure UUID or a slug like 'marca-modello-id_auto' and how the last UUID segment is extracted, adding crucial semantic detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool returns 'full details of a vehicle' listing specific components (specifications, price, images, dealer, podcast). This clearly identifies the verb and resource, and distinguishes it from sibling tools like find_dealer or search_vehicles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides guidance on how to format the vehicle_slug parameter (UUID or slug), but does not explicitly state when to use this tool versus alternatives like search_vehicles or get_market_intel. The usage context is implied but not articulated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vehicle_specs
Public technical specifications for vehicles on the Italian market.
Returns publicly known data for queries such as:
- "Quanto è lunga la Mazda 3?" (How long is the Mazda 3?)
- "Quanto consuma una Peugeot 2008 ibrida?" (What is the fuel consumption of a hybrid Peugeot 2008?)
- "Qual è la velocità massima di una Tesla Model 3?" (What is the top speed of a Tesla Model 3?)
- "Quanti posti ha una Range Rover Evoque?" (How many seats does a Range Rover Evoque have?)
Fields included: brand, model, trim, fuel type, engine (HP/kW, displacement,
cylinders, torque), CO2 emissions, fuel consumption (urban/extra-urban/combined),
performance (top speed, 0-100 acceleration), dimensions (length, width, height,
wheelbase), weight, boot capacity, doors, seats, segment, gearbox, drivetrain,
tyres, BEV range (for electric vehicles), fast/standard charging, air
suspension, suitability for new drivers (neo-patentati).
Args:
    query: Free text (e.g. "Mazda 3 2024", "Peugeot 2008 ibrido"). Searches
        brand, model, trim, and engine descriptions.
    brand: Exact, case-insensitive brand filter (e.g. "Mazda", "Peugeot").
    model: Substring model filter (e.g. "3", "2008", "Model 3").
    fuel_type: Substring fuel-type filter (e.g. "ibrido", "elettrico",
        "benzina", "diesel", "gpl", "metano").
    limit: Maximum number of results (1-30, default 5). One query may
        return many trims of the same model.

| Name | Required | Description | Default |
|---|---|---|---|
| brand | No | | |
| limit | No | | |
| model | No | | |
| query | No | | |
| fuel_type | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
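The docstring distinguishes an exact (case-insensitive) brand filter from substring model and fuel_type filters. A sketch of that matching logic over an illustrative record; the field names and function are assumptions, not the server's code.

```python
# Hypothetical illustration of exact-vs-substring filtering as documented
# for get_vehicle_specs. Record field names are assumed for the example.
def specs_filter_matches(record: dict, brand=None, model=None, fuel_type=None) -> bool:
    if brand and record["brand"].casefold() != brand.casefold():
        return False  # brand: exact match, case-insensitive
    if model and model.casefold() not in record["model"].casefold():
        return False  # model: substring match
    if fuel_type and fuel_type.casefold() not in record["fuel_type"].casefold():
        return False  # fuel_type: substring match
    return True


car = {"brand": "Peugeot", "model": "2008", "fuel_type": "Ibrido benzina"}
assert specs_filter_matches(car, brand="peugeot", fuel_type="ibrido")
assert not specs_filter_matches(car, brand="Peug")  # no partial brand match
```

The practical consequence: "Model 3" as a model filter matches any trim containing that substring, while "Tesl" as a brand filter matches nothing.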
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It states the data is 'pubblicamente noti' (publicly known), implying read-only, but does not mention authentication, rate limits, or side effects. Behavioral expectation is moderate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose and examples, then parameter details. It is well-structured but somewhat verbose; however, the length is justified by the need to document parameters without schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lists many returned fields, compensating for the missing output schema content. It covers use cases well but does not mention empty results or output format details. Given the tool has an output schema, this is mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description thoroughly explains each parameter with examples, default behavior, and filtering types (exact, substring). This adds significant value beyond the bare schema, compensating fully.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns public technical specs of Italian market vehicles, with specific verbs like 'restituisce' and examples. The name 'get_vehicle_specs' matches this. Differentiates from siblings like get_vehicle_details by focusing on specs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides example queries and hints but does not explicitly say when to use this tool versus siblings like get_vehicle_details or search_vehicles, nor when not to use it. Usage context is implied but not fully guided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_nlt_offers
Cross-dealer semantic search over Italian NLT (long-term rental) offers.
Args:
    query: Semantic query (e.g. "elettrica city car under 300/mese").
    durata_max_mesi: Maximum contract duration in months (36, 48, 60).
    canone_max: Maximum monthly fee in EUR.
    region: Filter by region name ("Lombardia"), province abbreviation
        ("MI"), full province name ("Milano"), or the offering dealer's
        city. Case-insensitive, accent-insensitive.
    limit: Maximum number of results (1-30, default 10).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| region | No | | |
| canone_max | No | | |
| durata_max_mesi | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
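An agent can validate the documented constraints (durations restricted to 36/48/60 months, limit in 1-30, default 10) before calling. A minimal sketch under those assumptions; the builder is illustrative.

```python
# Hypothetical client-side argument builder for search_nlt_offers,
# enforcing the documented duration values and limit range.
VALID_DURATIONS = {36, 48, 60}


def build_nlt_args(query: str, durata_max_mesi=None, canone_max=None,
                   region=None, limit: int = 10) -> dict:
    if durata_max_mesi is not None:
        assert durata_max_mesi in VALID_DURATIONS, "duration must be 36, 48, or 60"
    assert 1 <= limit <= 30, "limit must be in 1-30"
    args = {"query": query, "limit": limit}
    # Omit optional filters entirely when unset, rather than sending nulls.
    if durata_max_mesi is not None:
        args["durata_max_mesi"] = durata_max_mesi
    if canone_max is not None:
        args["canone_max"] = canone_max
    if region is not None:
        args["region"] = region
    return args
```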
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the semantic search nature and filtering behavior, but does not mention typical traits like read-only, authentication needs, or result ranking. It is adequate but not comprehensive for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, structured as a docstring with an Args section. Each parameter is on a single line, and the overall purpose is front-loaded. No unnecessary text; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description need not explain return values. It covers all parameters and the search scope. However, it does not mention result ordering or pagination beyond limit. A minor gap prevents a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It fully describes each parameter's meaning, valid values, and constraints (e.g., durata_max_mesi examples, region flexibility, limit range). This adds significant meaning beyond the schema's title/type/default.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it's a cross-dealer semantic search for NLT (long-term rental) offers. The verb 'search' and resource 'NLT offers' are specific. It distinguishes from sibling tools (e.g., search_vehicles searches for vehicles, not rental offers).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use it via the semantic query example ('elettrica city car under 300/mese'), but does not explicitly state when not to use it or how it differs from siblings like search_vehicles or find_dealer. No exclusions or alternative tool references.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_vehicles
Cross-dealer semantic search over Italian used vehicles (DealerMax network).
Args:
    query: Semantic query in Italian or English (e.g. "SUV ibrido familiare").
    region: Filter by Italian region name ("Lombardia", "Toscana"), province
        abbreviation ("MI", "FI"), full province name ("Milano", "Firenze"),
        or city ("Cusago"). Case-insensitive, accent-insensitive.
    budget_max: Maximum budget in EUR.
    brand: Car brand, case-insensitive (e.g. "BMW", "Toyota").
    fuel_type: Fuel type (benzina, diesel, ibrida, elettrica, gpl, metano).
    limit: Maximum number of results (1-30, default 10).

| Name | Required | Description | Default |
|---|---|---|---|
| brand | No | | |
| limit | No | | |
| query | Yes | | |
| region | No | | |
| fuel_type | No | | |
| budget_max | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
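Only query is required; the rest are optional filters with an enumerated fuel-type vocabulary. A sketch of assembling a call payload under those documented constraints; the function name and whitelist check are illustrative, not the server's behavior.

```python
# Hypothetical payload assembly for search_vehicles, checking fuel_type
# against the enumerated values from the docstring.
FUEL_TYPES = {"benzina", "diesel", "ibrida", "elettrica", "gpl", "metano"}


def build_vehicle_search(query: str, region=None, budget_max=None,
                         brand=None, fuel_type=None, limit: int = 10) -> dict:
    if fuel_type is not None:
        assert fuel_type.casefold() in FUEL_TYPES, "unknown fuel type"
    assert 1 <= limit <= 30, "limit must be in 1-30"
    args = {"query": query, "limit": limit}
    for key, value in [("region", region), ("budget_max", budget_max),
                       ("brand", brand), ("fuel_type", fuel_type)]:
        if value is not None:
            args[key] = value
    return args


payload = build_vehicle_search("SUV ibrido familiare",
                               fuel_type="ibrida", budget_max=25000)
assert payload["fuel_type"] == "ibrida"
```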
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It indicates a search operation but fails to state whether it is read-only, if authentication is required, rate limits, or any side effects. The behavioral transparency is insufficient for an agent to understand all implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear header and 'Args' section, but it is somewhat verbose, mixing Italian and English. It earns its place by explaining each parameter concisely, though a minor trim could improve readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description appropriately focuses on input parameters. However, it lacks usage guidelines and behavioral context. The description is adequate but not fully complete for an agent to use without additional hints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, yet the description provides detailed semantics for all 6 parameters with concrete examples and formatting notes (e.g., region can be region name, province acronym, or city; case-insensitive). This fully compensates for the lack of schema descriptions and adds high value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Cross-dealer semantic search su veicoli usati italiani (DealerMax network)', which provides a specific verb ('semantic search'), resource ('veicoli usati italiani'), and scope ('cross-dealer'). This distinguishes it from sibling tools like find_dealer or search_nlt_offers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to use this tool versus alternatives. It only describes parameters, leaving the agent to infer usage context without mention of comparative scenarios or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!