marketplace
Server Details
AI-native used car marketplace. 145K+ vehicles, 4300+ dealers, 13 US states, 20 MCP tools.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 19 of 19 tools scored. Lowest: 2.6/5.
Each tool has a clearly distinct purpose: session tools manage the shopping process, search tools find cars and dealers, and supporting tools handle details, history, stats, and feedback. There is no ambiguity between tools like cars_search and cars_dealer_inventory since the former searches across all dealers and the latter targets a specific dealer.
All tools follow a consistent 'cars_' prefix and use snake_case, and session-related tools are further grouped under 'cars_session_'. The only minor inconsistency is the singular 'dealer' in 'cars_dealer_inventory' alongside the plural 'cars_dealers', which slightly detracts from perfect consistency.
19 tools is slightly on the higher side but well-justified by the domain: the server covers inventory search, dealer lookup, vehicle details, history, stats, feedback, and a comprehensive session management system. Each tool serves a meaningful role, and the count feels appropriate for a full-featured car marketplace.
The tool set covers the entire car shopping workflow: searching, obtaining details and history, creating sessions, comparing vehicles, messaging dealers, scheduling visits, and managing trade-ins. Minor gaps exist (e.g., no tool to remove a car from a session or update session settings), but the core lifecycle is well-supported.
Available Tools
19 tools

cars_dealer_inventory (Grade: C)
Get all vehicles at a specific dealer.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| offset | No | ||
| dealer_id | Yes | Dealer ID |
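Since the description leaves pagination behavior undocumented, a sketch helps make the contract concrete. A minimal Python example of plausible arguments, assuming limit/offset act as a standard page window; the dealer ID value is invented:

```python
# Hypothetical arguments for cars_dealer_inventory. The schema gives no ID
# format or pagination defaults, so these values are illustrative only.
page_size = 50
args = {
    "dealer_id": "D-12345",  # invented ID; the real format is undocumented
    "limit": page_size,
    "offset": 0,  # assumed: advance by page_size until a page returns short
}
```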
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full burden. It only states 'Get', implying a read operation, but provides no details on side effects, permissions, rate limits, or return behavior. Insufficient for a tool with 3 parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, which is concise. However, it lacks structure (e.g., no bullet points, no examples). It earns its place but could be slightly expanded to improve clarity without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters (including pagination), no output schema, and siblings with overlapping functionality, the description is too minimal. It does not cover pagination behavior, result limits, or how to interpret 'all vehicles'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is only 33% (dealer_id has a basic description). The description does not elaborate on limit, offset, or dealer_id's meaning. For low schema coverage, the description should compensate but fails to add value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get all vehicles at a specific dealer', which is a specific verb and resource. However, it does not differentiate from siblings like cars_search or cars_vehicle, which could also retrieve vehicle data. The purpose is clear but lacks distinguishing context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., cars_search). No exclusions or prerequisites provided. The description is purely operational without contextual advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_dealers (Grade: B)
Search for car dealers by city, state, or ZIP code.
| Name | Required | Description | Default |
|---|---|---|---|
| zip | No | ZIP code or prefix | |
| city | No | City name | |
| limit | No | Max results | |
| state | No | State abbreviation (e.g. FL) |
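The notes below flag that parameter interaction is unspecified. A hedged Python sketch of plausible arguments, assuming city and state combine with AND to narrow the search; all values are invented:

```python
# Hypothetical arguments for cars_dealers. Whether zip/city/state combine
# with AND or OR is not documented; AND is assumed here.
args = {
    "city": "Miami",
    "state": "FL",  # two-letter abbreviation, per the schema example
    "limit": 10,
}
```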
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fails to disclose behavioral traits like how multiple parameters interact (AND vs OR), default behavior when no parameters are provided, pagination, or rate limits. The simple search description lacks sufficient transparency for safe autonomous invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that immediately conveys the tool's function. It is concise and front-loaded with the key verb and resource, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having 4 parameters and no output schema, the description does not clarify the default search range (e.g., national or local) or how results are sorted. It omits essential guidance for an agent to fully understand the tool's capabilities and limitations, especially given the presence of multiple sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, setting a baseline of 3. The description adds minimal value by naming the criteria (city, state, ZIP) but does not explain parameter interactions or validation rules, such as whether 'zip' and 'city' can be used together or if 'state' is required when 'city' is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Search' and the resource 'car dealers', specifying criteria by city, state, or ZIP code. It effectively distinguishes the tool from its sibling 'cars_dealer_inventory', which likely searches inventory of a specific dealer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as 'cars_dealer_inventory' or other search tools. There is no mention of exclusions or prerequisites, leaving the agent without contextual selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_feedback (Grade: A)
Submit feedback about the Cars Rootz service. We actively read every piece of feedback to improve the service. Tell us what worked, what didn't, what's missing, and whether your user was happy. This helps us build a better car shopping experience for everyone.
| Name | Required | Description | Default |
|---|---|---|---|
| rating | No | Satisfaction 1-5 (1=poor, 5=excellent) | |
| message | Yes | Your feedback — what worked, what didn't, what would help | |
| session | No | Session hash (optional — helps us understand context) | |
| agent_id | No | Your agent identifier | |
| category | Yes | Feedback category: search, session, email, photos, data, ux, missing_feature, bug, general | |
| user_happy | No | Was the human user happy? 1=yes, 0=no, omit if unknown |
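Because two of the six parameters are required and category is an enum, a concrete payload is useful. A minimal Python sketch, with invented feedback text and values:

```python
# Hypothetical cars_feedback payload. Only message and category are
# required; category must be one of the enumerated schema values.
args = {
    "category": "search",
    "message": "Price filters worked well; a sort-by-mileage option would help.",
    "rating": 4,      # satisfaction, 1=poor .. 5=excellent
    "user_happy": 1,  # 1=yes, 0=no; omit if unknown
}
```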
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions that feedback is actively read, implying a safe write operation, but does not disclose side effects, rate limits, or required authentication. The description adds some context but lacks concrete behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences, front-loaded with purpose, and every sentence adds value. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters, no annotations, and no output schema, the description covers purpose and content well. It could mention whether there is a confirmation response, but overall it is reasonably complete for a feedback tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning the schema already documents all parameters. The description adds general guidance on what kind of feedback to provide but does not add specific parameter-level meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it's for submitting feedback about the Cars Rootz service, listing specific topics (what worked, didn't, missing, user happiness). This distinctively separates it from all sibling tools, which focus on search, dealers, sessions, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description tells when to use the tool (after a car shopping experience) and what to include, but does not explicitly state when not to use it or compare to sibling alternatives. However, the context is clear enough for an agent to infer appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_history (Grade: A)
Get the full history of a vehicle by VIN — every dealer it appeared at, price changes over time, days on market, whether it was sold and resurfaced elsewhere. Data with origin.
| Name | Required | Description | Default |
|---|---|---|---|
| vin | Yes | 17-character Vehicle Identification Number |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description bears the full burden. It describes the output contents but does not disclose any behavioral traits like rate limits, prerequisites (beyond VIN), or side effects. As a read-only tool, the lack of destructive disclosure is acceptable, but more transparency would improve the score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, informative sentence that efficiently lists key data points. While slightly lengthy, it contains only relevant information and no filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description thoroughly explains what the tool returns (dealer history, price changes, days on market, sold status). It provides sufficient context for an agent to understand the tool's capability. Adding output format details could elevate it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The sole parameter 'vin' is described in the schema as '17-character Vehicle Identification Number', which is clear. The description adds context on how the VIN is used, but since schema coverage is 100%, this dimension score is appropriately above baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves the full history of a vehicle by VIN, listing specific data points (dealers, price changes, days on market, sold history). This distinctly separates it from siblings like cars_vehicle (single vehicle info) and cars_search (listing).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when detailed historical data for a specific VIN is needed, but does not explicitly state when not to use it or mention alternatives. However, the unique purpose makes the context clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_search (Grade: A)
Search used car inventory across all dealers. Filter by make, model, year, price, mileage, location, body type, fuel type, drivetrain. Returns matching vehicles with dealer info.
| Name | Required | Description | Default |
|---|---|---|---|
| zip | No | Dealer ZIP code prefix | |
| city | No | Dealer city (e.g. Miami) | |
| make | No | Car make (e.g. Toyota, Honda, BMW) | |
| limit | No | Max results (default 20) | |
| model | No | Car model (e.g. Camry, Civic, 3 Series) | |
| state | No | Dealer state (e.g. FL) | |
| offset | No | Pagination offset | |
| year_max | No | Maximum model year | |
| year_min | No | Minimum model year | |
| body_type | No | Body type (Sedan, SUV, Truck, Coupe, Van, etc.) | |
| fuel_type | No | Fuel type (Gasoline, Diesel, Electric, Hybrid) | |
| price_max | No | Maximum price in USD | |
| price_min | No | Minimum price in USD | |
| drivetrain | No | Drivetrain (FWD, RWD, AWD, 4WD) | |
| mileage_max | No | Maximum mileage |
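With 15 optional filters, an example query clarifies what a typical call looks like. A hedged Python sketch, assuming filters combine with AND and limit/offset page through results; all values are invented:

```python
# Hypothetical cars_search arguments. Filter combination (AND assumed)
# and result ordering are not documented.
args = {
    "make": "Toyota",
    "model": "Camry",
    "year_min": 2020,
    "price_max": 25000,   # USD
    "mileage_max": 60000,
    "state": "FL",
    "body_type": "Sedan",
    "limit": 20,  # schema default
    "offset": 0,  # assumed: increase by `limit` for the next page
}
```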
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds little beyond the schema; it doesn't disclose pagination behavior, pagination defaults, or response structure details. Since there are no annotations, a higher burden was expected but not fully met.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with the main purpose, no redundant information; every part earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 15 optional parameters and no output schema, the description covers the core function and return value sufficiently, though it lacks details on pagination and ordering.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add additional meaning to any parameters beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches used car inventory across all dealers and lists many filter options. It distinguishes from siblings like cars_dealer_inventory (likely per-dealer) and cars_dealers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use for broad inventory search but does not explicitly contrast with sibling tools or provide when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_add_car (Grade: B)
Add a vehicle to the buyer's shopping session. Include a fit score (1-10) and notes explaining why you recommend it.
| Name | Required | Description | Default |
|---|---|---|---|
| vin | Yes | 17-character VIN of vehicle to add | |
| notes | No | Why you recommend this vehicle | |
| score | No | AI fit score 1-10 (how well it matches preferences) | |
| session | Yes | Session hash | |
| agent_id | No | Your agent identifier |
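A short sketch shows the recommendation metadata the description asks for. Python, with an invented session hash and an example VIN:

```python
# Hypothetical cars_session_add_car arguments. vin and session are
# required; score and notes carry the recommendation rationale.
args = {
    "session": "abc123",         # session hash from cars_session_create (invented)
    "vin": "1HGBH41JXMN109186",  # example 17-character VIN
    "score": 8,                  # AI fit score, 1-10
    "notes": "Under budget, low mileage, AWD as requested.",
}
```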
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only states the core action but omits side effects (e.g., whether adding a duplicate VIN is allowed), prerequisites (session must exist), and error conditions. Insufficient for safe agent invocations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: two clear sentences with no unnecessary words. Every phrase is informative and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters and no output schema or annotations, the description is too sparse. It does not explain return values, error conditions, or dependencies (e.g., session must exist). Agent would lack critical context for robust invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds minor value by emphasizing the score range (1-10) and the role of notes, but mostly repeats schema descriptions. There is no coverage gap for it to compensate for, since coverage is already high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action 'Add a vehicle to the buyer's shopping session' and specifies what to include (fit score, notes). Distinguishes from siblings like cars_session_create or cars_session_compare as it focuses on adding a vehicle to an existing session.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Gives minimal usage guidance ('Include a fit score (1-10) and notes') but does not specify when to use this tool versus others like cars_session_deal or cars_session_compare. No explicit when-not or alternatives provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_compare (Grade: A)
Get a structured comparison of all vehicles tracked in a session — specs, pricing, market context side by side. Great for helping the buyer decide.
| Name | Required | Description | Default |
|---|---|---|---|
| session | Yes | Session hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full responsibility for behavioral disclosure. While it describes the output format (structured comparison), it fails to mention that this is a read-only operation, any authentication requirements, or performance implications. The agent must assume it is non-destructive, but this is not explicitly stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. The first sentence states the core purpose, and the second adds a use case. It is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (one parameter, no output schema), the description adequately conveys what the tool returns: a structured comparison with specs, pricing, and market context. It does not explain edge cases or output format, but for a simple tool this is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'session', which has a clear description. The tool description adds no extra information about the parameter beyond what the schema provides. Per the rubric, high schema coverage yields a baseline of 3, and no additional detail is given.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool produces a structured comparison of all vehicles in a session, covering specs, pricing, and market context. This distinguishes it from siblings like cars_session_read (reads session details) or cars_session_deal (deal actions). The verb 'Get' and resource 'comparison' are specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Great for helping the buyer decide' suggests a use case but does not explicitly state when to use this tool versus alternatives like cars_session_read or cars_search. No exclusions or prerequisites are mentioned, leaving the agent to infer context. More precise guidance would improve the score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_create (Grade: A)
Create a personalized car research session for the buyer. Returns a unique URL they can bookmark and return to. As you help the buyer, naturally learn their situation and save it here — payment method, trade-in, timeline. This makes the dealer lead much stronger when the buyer is ready to engage. Don't ask all at once — gather these naturally over the conversation.
| Name | Required | Description | Default |
|---|---|---|---|
| zip | No | Buyer ZIP code for location-aware search | |
| name | No | Buyer first name if they share it naturally | |
| makes | No | Comma-separated preferred makes (e.g. "Ford,Toyota") | |
| notes | No | Free-text buyer notes — what they care about, why they're looking | |
| radius | No | Search radius in miles (default 50) | |
| payment | No | How they plan to pay: cash, financing, lease (learn this naturally — don't interrogate) | |
| agent_id | No | Your agent identifier (claude, grok, gpt, perplexity) | |
| timeline | No | When they want to buy: browsing, this week, this month, no rush (pick up on cues) | |
| trade_in | No | Brief trade-in description if mentioned: "2019 Civic, ~85K miles" (use cars_session_tradein for full profile later) | |
| body_types | No | Comma-separated body types (e.g. "Truck,SUV") | |
| budget_max | No | Maximum budget |
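Since the description encourages gathering details progressively, a sketch of a partially filled call is illustrative. Python, all values invented; every field is optional:

```python
# Hypothetical cars_session_create arguments, built up as the buyer
# shares details in conversation.
args = {
    "zip": "33101",
    "makes": "Ford,Toyota",    # comma-separated, per the schema
    "body_types": "Truck,SUV",
    "budget_max": 35000,
    "timeline": "this month",
    "payment": "financing",
    "notes": "Needs a tow package; commutes 40 miles daily.",
}
```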
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses that the tool returns a bookmarkable URL, that the page belongs to the buyer, and that multiple AI agents can work on it. This provides useful behavioral context beyond the basic creation action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each serving a distinct purpose: action and output, usage context, and ownership. No unnecessary words, efficient for agent consumption.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with 11 parameters and no output schema, the description covers the main purpose, output (URL), usage scenario, and collaborative nature. It could mention that all parameters are optional, but this is inferred from the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 11 parameters. The description does not add additional semantics or constraints beyond what is in the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a personalized car shopping session and returns a unique URL. It explicitly distinguishes from siblings by focusing on session creation for buyers to save vehicles or start serious shopping, which is unique among the sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear when-to-use guidance: 'Use this when a buyer wants to save vehicles or start a serious shopping process.' It does not explicitly state when not to use or name alternatives, but the context of sibling tools makes the usage context clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_deal (Grade: A)
Get the current deal status — offers received, messages exchanged, unread count. Use to check if the dealer has responded.
| Name | Required | Description | Default |
|---|---|---|---|
| session | Yes | Session hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden. It indicates a read operation ('get') and lists return elements, but does not disclose potential side effects, authentication requirements, or rate limits. Adequate for a simple read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. First sentence states purpose and return content; second sentence gives usage guidance. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter read tool without output schema, the description covers what is returned and when to use it. Lacks details like response format or pagination, but is sufficient for an agent to decide when to invoke.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with the parameter 'session' described as 'Session hash'. The description adds no extra meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves current deal status, listing specific elements: offers, messages, unread count. It explicitly distinguishes from sibling tools by focusing on deal status, not inventory, search, or other session operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence 'Use to check if the dealer has responded' provides clear guidance on when to use. Though it doesn't explicitly exclude alternatives, the context suggests distinct purposes among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_interest (Grade: A)
Signal buyer interest in a vehicle to the dealer. This sends a professional email to the dealership on the buyer's behalf. Only use when the buyer has explicitly indicated interest.
| Name | Required | Description | Default |
|---|---|---|---|
| vin | Yes | VIN of vehicle buyer is interested in | |
| message | No | Optional message to include for the dealer | |
| session | Yes | Session hash | |
| agent_id | No | Your agent identifier |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully disclose behavior. It states that an email is sent on the buyer's behalf, which is a key behavioral detail. However, it lacks additional context such as whether the action is reversible, permissions required, or rate limits. The description meets the minimum bar but does not go beyond.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: three sentences with no wasted words. Each serves a clear purpose: stating the action, the mechanism, and a usage condition. It is well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no output schema, 4 parameters), the description provides adequate context: it explains the action, the mechanism (email), and the trigger condition. It does not detail return values or prerequisites like session validity, but these are implied by the parameter requirements. Overall, it is sufficiently complete for an experienced agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the description does not need to add much. The tool description does not elaborate on parameters beyond the schema, but that is acceptable given the schema's completeness. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: signaling buyer interest via email. It uses specific verbs and resources ('Signal buyer interest', 'sends a professional email'). However, it does not explicitly differentiate from sibling tools like cars_session_deal or cars_session_message, though the context of 'interest' vs. 'deal' or 'message' provides implicit distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes explicit guidance: 'Only use when the buyer has explicitly indicated interest.' This is clear context for when to invoke the tool. It does not provide exclusions or alternatives, but the condition is sufficiently specific for most use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_message (Grade: A)
Post a message to the session. Use for buyer questions, AI analysis notes, or responses. The dealer and other AI agents can see these.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | Message content | |
| session | Yes | Session hash | |
| agent_id | No | Your agent identifier | |
| msg_type | No | Type: note, question, answer, offer, alert | note |
| from_role | No | Role: buyer, ai, or system | ai |
| vehicle_vin | No | Optional: which vehicle this is about |
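The defaults in the table are easy to miss, so a sketch makes them explicit. Python, with invented values:

```python
# Hypothetical cars_session_message arguments. When omitted, msg_type
# defaults to "note" and from_role to "ai".
args = {
    "session": "abc123",
    "content": "Both trucks fit the budget; the 2021 has a cleaner history.",
    "msg_type": "note",                  # note, question, answer, offer, or alert
    "vehicle_vin": "1HGBH41JXMN109186",  # optional: ties the message to one vehicle
}
```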
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It discloses that messages are visible to dealer and AI agents, but does not mention authentication, rate limits, error handling, or consequences of posting to a non-existent session.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: three sentences with no filler. The key action and use cases are front-loaded, making it efficient for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 6 parameters (2 required) and no output schema, the description explains the purpose and visibility but lacks details on return values, prerequisites (e.g., session must exist), and error scenarios. It is adequate but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already documents all parameters adequately. The description adds marginal value by listing use cases but does not explain parameter semantics beyond what the schema provides, justifying a baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Post a message to the session' with specific use cases (buyer questions, AI analysis notes, responses). While it distinguishes the action, it doesn't explicitly differentiate from sibling tools like 'cars_session_reply', but the use cases provide clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides usage context: 'Use for buyer questions, AI analysis notes, or responses' and who can see them (dealer and other AI agents). However, it lacks explicit when-not-to-use instructions or comparisons to alternative sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_notify (Grade: A)
Set the buyer's email notification preference for this session. Three levels: "bcc" (buyer gets a private copy of dealer replies — dealer never sees buyer email), "cc" (buyer is CC'd — dealer can see buyer email), or "none" (no email, buyer must check back via AI). This upgrades the session identity level to "email".
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Notification mode: "bcc" (private), "cc" (visible to dealer), or "none" | bcc |
| email | Yes | Buyer's email address | |
| session | Yes | Session hash | |
| agent_id | No | Your agent identifier |
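A sketch of the privacy-relevant mode choice, in Python. The required address parameter is assumed to be named `email` (its name is missing from the table above); the address value is invented:

```python
# Hypothetical cars_session_notify arguments. "bcc" (the default) keeps
# the buyer's address hidden from the dealer; "cc" exposes it; "none"
# sends no email at all.
args = {
    "session": "abc123",
    "email": "buyer@example.com",  # assumed parameter name
    "mode": "bcc",
}
```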
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full burden of behavioral disclosure. It explains the implications of each mode (e.g., 'private copy', 'visible to dealer') and notes that using this tool upgrades the session identity level to 'email'. While it doesn't mention idempotency or reversibility, the key behaviors are clearly stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no filler. The first states the purpose, the second enumerates the modes and their visibility implications, and the third adds the behavioral side effect. Every word serves a purpose, and the key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and no output schema, the description adequately explains what the tool does and its side effects. It covers the main usage scenario but does not specify the return value or error handling. For this context, it is nearly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing baseline clarity. The description adds value by elaborating on the mode meanings ('bcc' as private, 'cc' as visible to dealer) and mentioning the identity upgrade effect, which goes beyond the schema's parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool sets the buyer's email notification preference for a session, explaining the three levels (bcc, cc, none) and the side effect of upgrading the session identity level. This distinguishes it from sibling tools that handle other session actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the tool (when setting email notification preference) and explains the three modes, but does not explicitly state prerequisites or when not to use it. It provides sufficient context for an AI agent to infer its usage among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_read (Grade: A)
Read the current state of a shopping session — tracked vehicles, messages, preferences, active offers, and which other AI agents are working on it. Use this to understand context before taking action.
| Name | Required | Description | Default |
|---|---|---|---|
| session | Yes | Session hash (the short code from the URL) | |
| agent_id | No | Your agent identifier (for tracking who read it) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations. Description states 'read' implying safety, but doesn't disclose potential read tracking via agent_id or side effects. With zero annotations, more detail on non-destructiveness would improve score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no fluff. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lists key contents (vehicles, messages, preferences, offers, other agents). No output schema, but description compensates with sufficient detail for agent to understand response shape.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema 100% coverage. Description adds value by explaining session as 'short code from URL' and agent_id purpose as 'for tracking who read it', richer than schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Read' and resource 'shopping session' with specific components listed. Distinct from sibling session tools that modify, create, or compare.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises using before action: 'Use this to understand context before taking action.' Implicitly distinguishes from write operations but lacks explicit when-not alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_reply (Grade: A)
Send a follow-up message to the dealer continuing the conversation. Use this after the dealer has replied and the buyer wants to respond — negotiate price, ask questions, schedule a visit. The email thread continues naturally.
| Name | Required | Description | Default |
|---|---|---|---|
| vin | Yes | VIN of the vehicle being discussed | |
| message | Yes | The follow-up message to send to the dealer | |
| session | Yes | Session hash | |
| agent_id | No | Your agent identifier |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden for behavioral disclosure. It does not mention authentication requirements, rate limits, side effects (e.g., what happens to the conversation thread), or any constraints beyond the use case. The phrase 'email thread continues naturally' is vague and not informative.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no redundant information. The core action is front-loaded in the first sentence, and additional context is compactly provided in the subsequent sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should explain the return value or outcome, but it does not. It also fails to mention prerequisites (e.g., the session must exist). However, for a simple follow-up tool, the description covers the basic usage adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for all 4 parameters, so baseline is 3. The description adds no additional meaning beyond what the schema already provides; for example, it does not clarify the format of 'message' or the role of 'session' and 'agent_id' further.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Send a follow-up message to the dealer continuing the conversation', specifying the verb (send), resource (follow-up message), and context (continuing conversation). It distinguishes from sibling tools like cars_session_message by indicating this is for replies after the dealer has responded.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use this after the dealer has replied and the buyer wants to respond', providing clear when-to-use guidance. Lists example actions (negotiate price, ask questions, schedule a visit). Could be improved by mentioning when not to use, but the context is well-defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_tradein (Grade: A)
Set or update the buyer's trade-in vehicle information. Gather what you can from the conversation — VIN, license plate, year/make/model, mileage, condition. You don't need everything at once; start with what the buyer knows and build the profile progressively. The dealer will make the trade-in offer based on this info. Three tiers: "quick" (VIN/plate + mileage), "standard" (+ condition answers), "full" (+ photos via the bridge page).
| Name | Required | Description | Default |
|---|---|---|---|
| vin | No | Trade-in vehicle VIN (17 chars). If provided, the server decodes year/make/model/trim via NHTSA. | |
| make | No | Make (auto-filled if VIN provided) | |
| trim | No | Trim level | |
| year | No | Model year (auto-filled if VIN provided) | |
| color | No | Exterior color | |
| model | No | Model (auto-filled if VIN provided) | |
| notes | No | Additional notes about the trade-in | |
| plate | No | License plate number (alternative to VIN) | |
| mileage | No | Current odometer reading | |
| session | Yes | Session hash | |
| agent_id | No | Your agent identifier | |
| accidents | No | Accident history: none, minor, moderate, major, unknown | |
| title_type | No | Title status: clean, salvage, rebuilt, lien | |
| body_damage | No | Body damage: none, minor, moderate, significant | |
| plate_state | No | State the plate is registered in (e.g. FL, TX) | |
| modifications | No | Aftermarket modifications (free text) | |
| tire_condition | No | Tire condition: good, fair, needs_replacement | |
| warning_lights | No | Dashboard warning lights on? 0=no, 1=yes | |
| mechanical_issues | No | Known mechanical issues (free text) |
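The tier language maps naturally onto successive calls. A hedged Python sketch of a two-step progression, assuming later calls merge with earlier data (merge-vs-replace behavior is undocumented); all values invented:

```python
# Hypothetical progressive trade-in profile: "quick" tier first, then
# "standard" tier details as the buyer answers condition questions.
quick_tier = {
    "session": "abc123",
    "vin": "1HGBH41JXMN109186",  # server decodes year/make/model/trim via NHTSA
    "mileage": 85000,
}
standard_tier = {
    "session": "abc123",
    "accidents": "minor",      # none, minor, moderate, major, unknown
    "title_type": "clean",     # clean, salvage, rebuilt, lien
    "tire_condition": "fair",  # good, fair, needs_replacement
    "warning_lights": 0,       # 0=no, 1=yes
}
```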
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden for behavioral disclosure. It explains that you can call the tool multiple times to build the profile progressively and that the dealer uses this info for an offer. However, it does not clarify idempotency (whether calls merge or replace), validation behavior, or error handling. The mention of photos via a bridge page is ambiguous as the tool does not accept photo parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is 5 sentences and efficiently conveys purpose, strategies, and tiers. It is well-front-loaded with the core action. Slight redundancy in listing parameters that are already in schema, but overall no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (19 parameters, no annotations, no output schema), the description covers the progressive approach and tiers but lacks information about return values, error conditions, and the meaning of 'photos via bridge page'. It adequately sets context for a session-based trade-in flow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters already have descriptions. The description adds value by grouping parameters into tiers (quick: VIN/plate+mileage; standard: +condition; full: +photos via bridge page), guiding progression. It also explains that VIN decoding happens server-side. This goes beyond the schema's individual param descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Set or update the buyer's trade-in vehicle information' with a specific verb and resource. It distinguishes itself from the sibling 'cars_session_tradein_read' by being the write operation. The progressive buildup and three tiers add additional clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear guidance on when to use the tool: when gathering trade-in info from conversation. It advises starting with what the buyer knows and building progressively, and outlines three tiers for different completeness levels. However, it does not explicitly state when not to use it or mention alternatives like reading existing info.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_tradein_read (Grade: A)
Read the buyer's trade-in vehicle profile. Returns decoded vehicle info, condition, photos, and a summary suitable for including in dealer communications. Also returns a photo_upload_url the buyer can visit on their phone to add photos.
| Name | Required | Description | Default |
|---|---|---|---|
| session | Yes | Session hash |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that it returns a photo_upload_url for buyer to add photos, a behavioral trait not in schema. No annotations provided, so description carries full burden. 'Read' implies nondestructive, but no explicit statement about errors or permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no fluff. The first states the purpose, the second lists the main returns, and the third adds a key behavioral detail. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with no output schema, the description lists all key return types: decoded info, condition, photos, summary, and photo_upload_url. Missing error scenarios, but sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has one parameter 'session' with description 'Session hash'. The description provides context that session refers to buyer's trade-in, adding meaning beyond the bare schema. Baseline 3 due to 100% coverage, but context pushes to 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states that it reads the buyer's trade-in vehicle profile and lists the return content (decoded info, condition, photos, summary). The verb 'read' and resource 'trade-in' are specific, but the description does not explicitly differentiate the tool from its sibling 'cars_session_tradein', the corresponding write operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied: call the tool when trade-in profile details are needed. No explicit when-to-use, when-not-to-use, or alternatives guidance is provided; the description focuses on output rather than context of use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_session_visit (A)
Schedule a dealer visit (test drive, purchase, trade-in appraisal). Generates a visit code and QR code the buyer shows at the dealership. This is how the dealer knows "this is the person whose AI has been talking to us." The visit code proves the connection and ensures the buyer earns their $100-$200 incentive. Only use when the buyer explicitly says they want to visit the dealer.
| Name | Required | Description | Default |
|---|---|---|---|
| vin | Yes | VIN of the vehicle they want to see | |
| date | No | Preferred date (e.g. "Saturday", "2026-05-10") | |
| time | No | Preferred time (e.g. "morning", "2pm") | |
| notes | No | Buyer notes for the dealer (e.g. "Ask for Mike", "Bringing my wife") | |
| session | Yes | Session hash | |
| agent_id | No | Your agent identifier | |
| visit_type | No | Type: test_drive, purchase, trade_appraisal, general | |
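A hedged sketch of scheduling a visit, again reusing the connected `client`. Only vin and session are required; the other arguments are optional preferences, and the example values mirror the schema's own hints.

```typescript
// Only `vin` and `session` are required; the rest are optional preferences.
const visit = await client.callTool({
  name: "cars_session_visit",
  arguments: {
    session: "SESSION_HASH",
    vin: "1HGCM82633A004352",
    visit_type: "test_drive",
    date: "Saturday",
    time: "morning",
    notes: "Ask for Mike",
  },
});

// Per the description, the result should carry a visit code and QR code
// that the buyer shows at the dealership.
console.log(visit.content);
```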
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It explains the visit code and incentive, but does not mention side effects, authorization needs, or whether the action is reversible. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise paragraph of five sentences, front-loaded with the main purpose. Every sentence adds value with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 7 parameters, no output schema, and no annotations, the description explains the purpose and code generation but omits the return format and whether scheduling is immediate. Adequate for basic use but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add meaningful parameter information beyond the schema; it only provides overall context, so there is no improvement over the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool schedules a dealer visit and generates a code/QR. It specifies the resources and actions (test drive, purchase, etc.) and distinguishes from siblings like inventory or session management.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Only use when the buyer explicitly says they want to visit the dealer,' providing clear when-to-use guidance. It lacks explicit when-not-to-use or alternatives but is sufficient for the intended use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_stats (A)
Get database statistics: total vehicles, dealers, coverage by state, top makes.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
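Since the tool takes no parameters, a call is a one-liner; a sketch with the same reused `client`:

```typescript
// No arguments are required; pass an empty object.
const stats = await client.callTool({ name: "cars_stats", arguments: {} });
console.log(stats.content); // totals, per-state coverage, top makes (per the description)
```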
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It uses 'Get' which implies a read-only operation, but does not explicitly confirm non-destructiveness, authorization needs, or return format. For a simple stats tool, this is minimally adequate but lacks full disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence, front-loaded with the core action, that lists the statistics provided. No redundant or unnecessary words; efficient and focused.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no output schema, and a simple description, the tool is adequately described for its purpose. It lists the statistics included, which is sufficient for an agent to understand what it returns. A note on whether the data is real-time or aggregated would enhance completeness, but it is not required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so the description does not need to add param information. Baseline for 0 parameters is 4, and there is no contradiction or missing info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool retrieves database statistics and lists specific items (total vehicles, dealers, coverage by state, top makes). The verb 'Get' and resource 'database statistics' are precise, and the tool is easily distinguished from its siblings, none of which provide statistics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching overall database statistics, but it does not explicitly state when to use this tool vs alternatives, nor provide any exclusions. Since siblings are all specific operations (e.g., dealer inventory, vehicle details), the context makes the purpose clear, but no explicit guidance is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cars_vehicle (A)
Get full details for a specific vehicle by VIN. Returns specs, price, mileage, photos, recalls, and dealer info.
| Name | Required | Description | Default |
|---|---|---|---|
| vin | Yes | 17-character Vehicle Identification Number | |
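A final sketch with the reused `client`; the VIN must be the full 17-character identifier, and the sample VIN is purely illustrative.

```typescript
// Fetch full details for one vehicle by its 17-character VIN.
const vehicle = await client.callTool({
  name: "cars_vehicle",
  arguments: { vin: "1HGCM82633A004352" },
});
console.log(vehicle.content); // specs, price, mileage, photos, recalls, dealer info
```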
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It discloses the return contents (specs, price, etc.) but does not mention authentication, rate limits, or side effects. As a read operation the tool is straightforward, but the disclosure lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that efficiently conveys the purpose, input, and output. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description is nearly complete and lists the major return elements. It could mention whether authentication is needed, but it is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, including a description of the 'vin' format. The tool description adds value by explaining that the VIN is used to fetch full details and by summarizing what is returned, going beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get', the resource 'full details for a specific vehicle', the input method 'by VIN', and summarizes the output (specs, price, etc.). It distinguishes from siblings like cars_search and cars_dealer_inventory.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you have a VIN and want full details, but does not explicitly state when to use this tool versus siblings like cars_search for searching or cars_history for history. No exclusions or alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.