Tesla
Server Details
Control your Tesla: wake it, warm it up, unlock it, and more. Get your developer token at https://Infoseek.ai/mcp. You also need your own Tesla developer token, which is tied to your car or fleet.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 15 of 15 tools scored. Lowest: 2.9/5.
Each tool has a clearly distinct purpose with no functional overlap. The three navigation variants are explicitly differentiated by input type (GPS coordinates vs. Place IDs vs. multi-stop waypoints), and vehicle control actions (climate, locks, lights, horn, windows, sunroof) target separate physical systems.
Most tools follow a consistent 'vehicle_<feature>_<action>' pattern (e.g., vehicle_door_lock, vehicle_auto_conditioning_start). Minor deviations exist with verb-first naming for vehicle_flash_lights, vehicle_honk_horn, and vehicle_wake_up, but these remain readable and predictable.
Fifteen tools is an ideal count for this domain—comprehensive enough to cover essential remote vehicle operations (climate, security, navigation, alerts) without bloat. Each tool earns its place; even the three navigation variants serve distinct integration scenarios.
The surface covers the core remote control lifecycle well (wake, status, climate, locks, lights, navigation). Minor gaps exist for EV-specific operations like charging control (start/stop charging, open charge port) and trunk/frunk access, but agents can accomplish primary 'remote control' workflows without these.
Available Tools
15 tools

vehicle_auto_conditioning_start
Starts the vehicle's climate control system (HVAC). This is the correct final action for 'warm up my car', 'warm up the car', 'turn on climate', or 'start HVAC' intents.
| Name | Required | Description | Default |
|---|---|---|---|
| vehicle_id | Yes | Required Tesla vehicle id from `vehicle_list`. Starts HVAC for this vehicle. | |
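Since the transport is MCP over HTTP, an agent invokes this tool with a JSON-RPC `tools/call` request. A minimal sketch of building that payload (the vehicle id is hypothetical; real ids come from a prior `vehicle_list` call, and whether a `vehicle_wake_up` call must precede HVAC commands is undocumented here):

```python
import json

def tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request for the MCP `tools/call` method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Hypothetical vehicle id; obtain real ids from `vehicle_list` first.
req = tool_call("vehicle_auto_conditioning_start", {"vehicle_id": "12345678901234567"})
print(json.dumps(req, indent=2))
```

The same wrapper works for every single-parameter sibling (locks, lights, horn); only the tool name and arguments change.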
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full behavioral disclosure burden. While it clarifies the 'warming up' use case, it fails to mention critical operational requirements (e.g., whether 'vehicle_wake_up' must be called first), idempotency behavior, or duration expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first defines the function; the second provides high-value intent classification examples. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter action tool, the description covers the basic operation and usage contexts. However, given the presence of 'vehicle_wake_up' as a sibling, the omission of whether the vehicle must be awake first leaves a significant operational gap for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single 'vehicle_id' parameter, establishing the baseline score of 3. The description itself adds no semantic details about the parameter, relying entirely on the schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action ('Starts') and target resource ('vehicle's climate control system/HVAC'). The term 'HVAC' precisely distinguishes this from sibling tools like door locks or horn controls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit natural language intent mapping ('warm up my car', 'turn on climate'), which helps the LLM recognize when to invoke this tool. However, it lacks explicit contrast with the sibling 'vehicle_auto_conditioning_stop' or mention of prerequisite steps like waking the vehicle.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vehicle_auto_conditioning_stop
Stops the vehicle's climate control system (HVAC). Use for 'turn off climate', 'stop HVAC', or 'turn the climate off' intents.
| Name | Required | Description | Default |
|---|---|---|---|
| vehicle_id | Yes | Required Tesla vehicle id from `vehicle_list`. Stops HVAC for this vehicle. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. States the mutation action ('stops') but omits behavioral details like vehicle wake requirements, idempotency (safe to call if already off?), side effects, or return confirmation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. Front-loaded with the core action, followed by intent examples that earn their place by aiding LLM routing decisions. Appropriately sized for a single-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for the tool's simplicity (1 param, no output schema). Covers purpose and usage contexts. Minor gap: could mention relationship to vehicle_auto_conditioning_start or wake requirements, but sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (vehicle_id documented). Description does not discuss the parameter, but baseline is 3 when schema coverage is high per rubric. No additional semantic context provided beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'stops' with resource 'climate control system (HVAC)' and distinguishes from siblings like vehicle_door_lock or vehicle_flash_lights by specifying HVAC domain. The second sentence reinforces with intent examples.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit example intents ('turn off climate', 'stop HVAC') indicating when to use the tool. However, lacks explicit 'when-not-to-use' guidance or mention of the sibling alternative vehicle_auto_conditioning_start for contrast.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vehicle_door_lock
Locks all doors on the selected vehicle. Use for explicit 'lock the car' intents.
| Name | Required | Description | Default |
|---|---|---|---|
| vehicle_id | Yes | Required Tesla vehicle id from `vehicle_list`. Use for explicit lock requests only. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. While 'Locks' implies a state-changing mutation, the description lacks details on prerequisites (e.g., whether vehicle_wake_up is required), idempotency, error conditions, or reversibility that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence front-loads the core action; second sentence specifies the intent pattern for invocation. No redundancy or unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with 100% schema coverage and no output schema, the description is nearly sufficient. Minor gap: given the 'vehicle_wake_up' sibling exists, the description could note whether the vehicle must be awake first, but this is a small omission for a simple lock operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The schema description for vehicle_id adds valuable semantic context by referencing the 'vehicle_list' sibling tool as the source. Main description does not discuss parameters, but schema compensates adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('Locks') and resource ('all doors on the selected vehicle'). Clearly distinguishes from sibling 'vehicle_door_unlock' through the explicit lock action and 'lock the car' intent phrase.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states "Use for explicit 'lock the car' intents", providing clear positive guidance on when to invoke. However, it does not explicitly name 'vehicle_door_unlock' as the alternative for inverse operations, though this is reasonably inferable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vehicle_door_unlock
Unlocks all doors on the selected vehicle. Use for explicit 'unlock the car' intents.
| Name | Required | Description | Default |
|---|---|---|---|
| vehicle_id | Yes | Required Tesla vehicle id from `vehicle_list`. Use for explicit unlock requests only. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It adds scope specificity ('all doors' rather than just 'doors'), but lacks operational context about security implications, wake-up prerequisites (given sibling vehicle_wake_up exists), or reversibility.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, front-loaded with the core action. No redundancy or waste; every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a single-parameter tool with full schema coverage. Covers the essential operation and intent, though could strengthen context by referencing the locking counterpart given the sibling relationship.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with vehicle_id fully documented. The description does not mention the parameter, but with complete schema coverage, the baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Unlocks') with clear resource ('all doors on the selected vehicle'), distinguishing it from siblings like vehicle_door_lock and other vehicle operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('for explicit unlock the car intents'), providing clear intent matching. However, it does not explicitly reference the sibling vehicle_door_lock as an alternative or mention exclusion cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vehicle_flash_lights
Flashes the vehicle headlights briefly as a visual signal. Use for 'flash the lights' or 'locate my car' intents.
| Name | Required | Description | Default |
|---|---|---|---|
| vehicle_id | Yes | Required Tesla vehicle id from `vehicle_list`. Use to locate/identify a parked car by light flash. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds behavioral trait 'briefly' indicating duration constraint and 'visual signal' clarifying sensory mode. However, lacks operational context: doesn't mention if vehicle must be awake first (relevant given 'vehicle_wake_up' sibling), rate limits, or safety implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence defines the action, second sentence defines the use case/intent. Perfectly front-loaded with no filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a simple single-parameter action tool with no output schema. Covers the core use case (locating car) and sensory modality. Could be improved by mentioning prerequisite vehicle state given the wake_up sibling exists, but sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (vehicle_id fully documented in schema). Description does not explicitly mention the parameter or add syntax/format details beyond schema, but baseline 3 is appropriate given high schema coverage. Schema already indicates ID source is 'vehicle_list'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Flashes') + resource ('vehicle headlights') + scope ('briefly as a visual signal'). Explicitly distinguishes from sibling 'vehicle_honk_horn' by specifying visual vs auditory modality and from other controls by focusing on signaling/locating functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly maps to intents: 'Use for 'flash the lights' or 'locate my car' intents.' Provides clear positive guidance on when to invoke. Lacks explicit exclusions or alternative recommendations (e.g., when to prefer this over honk_horn), but the intent mapping is specific enough for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vehicle_honk_horn
Honks the horn on the selected vehicle once. Use for explicit 'honk the horn' or 'sound the horn' intents, often to help locate a parked vehicle.
| Name | Required | Description | Default |
|---|---|---|---|
| vehicle_id | Yes | Required Tesla vehicle id from `vehicle_list`. Use for explicit horn/honk requests only. Do not guess ids. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds critical behavioral detail 'once' indicating single vs continuous operation. However, omits other expected behaviors for this domain: whether vehicle must be awake (relevant given vehicle_wake_up sibling), volume level, or disturbance warnings.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence defines the action. Second sentence covers intent matching and use case. Every word earns its place; appropriately front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a simple single-action tool with one required parameter and no output schema. Covers action, intent matching, and typical use case. Minor gap: doesn't clarify relationship to vehicle_wake_up (whether honking auto-wakes or requires prior wake), though this is inferable from sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for the single vehicle_id parameter, including source reference ('from vehicle_list'). Tool description uses 'selected vehicle' implying prior selection but adds no syntax, format, or semantic details beyond what the schema already provides. Baseline 3 appropriate given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Honks' with clear resource 'horn on the selected vehicle' and scope 'once'. Distinguishes from siblings like flash_lights by specifying exact acoustic action and typical use case (locating parked vehicle).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'for explicit 'honk the horn' or 'sound the horn' intents' and provides context 'often to help locate a parked vehicle'. Lacks explicit mention of alternatives (e.g., flash_lights for the same use case) or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vehicle_list
List vehicles available to the authenticated Tesla account.
| Name | Required | Description | Default |
|---|---|---|---|
| action | Yes | Required operation selector. Always set to `list-vehicles`. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| vehicles | Yes | |
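Because every control tool requires a `vehicle_id` sourced from this tool, a typical workflow calls `vehicle_list` first and extracts an id. A sketch against a stubbed response (only the `vehicles` array is documented in the output schema, so the `id` and `display_name` fields inside each entry are assumptions):

```python
# Stubbed `vehicle_list` result; per-vehicle field names are assumed,
# since the output schema documents only the `vehicles` key.
list_result = {
    "vehicles": [
        {"id": "12345678901234567", "display_name": "My Model 3"},
    ]
}

# Pick a vehicle id to pass to subsequent control tools (lock, HVAC, etc.).
vehicle_id = list_result["vehicles"][0]["id"]
args = {"vehicle_id": vehicle_id}
```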
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. States 'authenticated' hinting at auth requirements but lacks disclosure on rate limiting, caching behavior, or whether this returns real-time vs cached data. Does not clarify read-only safety despite lack of readOnlyHint annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single front-loaded sentence of 7 words with zero redundancy. Appropriate length for a simple listing tool where output schema handles return value documentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a low-complexity tool with output schema present, but misses opportunity to mention this is the entry point for vehicle ID discovery needed by all sibling control tools. Would benefit from noting this returns the vehicle inventory.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for the single const parameter. Description adds no parameter details, but with schema fully documenting the 'action' selector as always set to 'list-vehicles', the baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'List' paired with resource 'vehicles' and scope 'authenticated Tesla account'. Clearly distinguishes from sibling control tools (lock, unlock, honk, etc.) by indicating this is a retrieval/discovery operation rather than a vehicle command.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or prerequisites stated, though implied by being the sole listing tool among vehicle control siblings. Missing guidance that this should be used first to obtain vehicle IDs required by other operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vehicle_remote_boombox
Plays an external sound through the vehicle's Boombox speaker (if equipped). The sound is selected using a numeric sound_id. Example values include: 0 for a random Boombox sound, and 2000 for a 'locate ping'. Regional restrictions may limit Boombox functionality, and the vehicle must support external speakers.
| Name | Required | Description | Default |
|---|---|---|---|
| sound_id | Yes | Required boombox sound id (for example `0` random, `2000` locate ping). | |
| vehicle_id | Yes | Required Tesla vehicle id from `vehicle_list`. | |
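Since the description documents only two `sound_id` values and the full catalogue is unknown, a cautious client might restrict arguments to the documented ids. A sketch (the guard policy is an illustrative choice, not server behavior):

```python
# Sound ids named in the tool description; the full catalogue is undocumented,
# so this sketch rejects anything else rather than guessing.
KNOWN_SOUND_IDS = {0: "random Boombox sound", 2000: "locate ping"}

def boombox_args(vehicle_id: str, sound_id: int) -> dict:
    # Fail fast on ids the description does not document.
    if sound_id not in KNOWN_SOUND_IDS:
        raise ValueError(
            f"sound_id {sound_id} is undocumented; known ids: {sorted(KNOWN_SOUND_IDS)}"
        )
    return {"vehicle_id": vehicle_id, "sound_id": sound_id}

args = boombox_args("12345678901234567", 2000)  # 'locate ping'
```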
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates equipment requirements (Boombox/external speakers), regional limitations, and selection mechanism (numeric sound_id with concrete examples). It does not disclose idempotency, error behaviors, or rate limits, but covers the critical operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: function definition, operational mechanism with examples, and operational constraints. Every sentence adds distinct value without redundancy. Information is front-loaded with the core action before detailing limitations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a two-parameter tool with simple types and no output schema, the description provides adequate completeness. It covers the essential prerequisites (hardware support, regional availability) and usage examples that an agent needs to invoke the tool correctly, though it could benefit from mentioning error states or success indicators.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the sound_id examples (0, 2000) already present in the schema but does not add additional semantic context beyond what the parameter descriptions already provide (e.g., valid ranges, full enumeration of IDs, or detailed explanation of the 'locate ping').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Plays') and clearly identifies the target resource ('external sound through the vehicle's Boombox speaker'). It effectively distinguishes this from sibling tools like 'vehicle_honk_horn' by emphasizing the Boombox/external speaker requirement and equipment constraints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance through equipment prerequisites ('if equipped', 'must support external speakers') and regional restrictions. However, it lacks explicit when-to-use guidance distinguishing it from the similar 'vehicle_honk_horn' sibling or alternative sound-producing methods.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vehicle_status_overview
Get a full status snapshot for one vehicle (battery range, charging, location, climate, locks, firmware, alerts). Do not use this to see if the vehicle is awake/online. Do use this for general status, 'view battery', or 'battery and range' intents.
| Name | Required | Description | Default |
|---|---|---|---|
| action | Yes | Required operation selector. Always set to `get-vehicle-status`. | |
| vehicle_id | Yes | Required Tesla vehicle id from `vehicle_list` (for example `12345678901234567`). Do not guess ids. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| locked | No | |
| location | No | |
| doors_open | No | |
| odometer_mi | No | |
| windows_open | No | |
| battery_level | Yes | |
| inside_temp_f | No | |
| charging_state | No | |
| outside_temp_f | No | |
| service_alerts | No | |
| battery_range_mi | No | |
| firmware_version | No | |
| tire_pressure_psi | No | |
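Note that `battery_level` is the only required field in the output schema; everything else may be absent. A defensive consumer should read optional fields with `.get()`. A sketch (field meanings are inferred from the schema names; the summary formatting is illustrative):

```python
def summarize_status(status: dict) -> str:
    # battery_level is the only field the output schema marks required.
    parts = [f"battery {status['battery_level']}%"]
    # All remaining fields are optional, so read them defensively.
    if (range_mi := status.get("battery_range_mi")) is not None:
        parts.append(f"{range_mi} mi range")
    if (state := status.get("charging_state")) is not None:
        parts.append(f"charging: {state}")
    if (locked := status.get("locked")) is not None:
        parts.append("locked" if locked else "unlocked")
    return ", ".join(parts)

print(summarize_status({"battery_level": 72, "charging_state": "Stopped", "locked": True}))
```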
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses what data fields are returned and explicitly excludes awake/online checking as a use case. However, it does not clarify whether invoking this tool wakes the vehicle, potential latency/caching behavior, or rate limiting concerns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. First sentence covers purpose and return payload; second covers usage constraints. Information is front-loaded and dense.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given presence of output schema, description appropriately focuses on high-level data categories rather than exhaustive return value documentation. Adequately covers selection criteria vs. siblings. Minor gap regarding whether operation wakes vehicle or requires awake state.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with clear explanations for both action (const value) and vehicle_id (sourcing from vehicle_list). Description text focuses on return value semantics rather than parameters, which is appropriate given complete schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('Get') + resource ('full status snapshot for one vehicle') and enumerates exact data categories returned (battery, charging, location, climate, locks, firmware, alerts). Clearly distinguishes from action-oriented siblings like vehicle_door_lock or vehicle_wake_up.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when NOT to use ('Do not use this to see if the vehicle is awake/online') and when TO use ('Do use this for general status, 'view battery', or 'battery and range' intents'). Directly contrasts with vehicle_wake_up sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vehicle_sun_roof_control
Controls the powered sunroof. Only available on vehicles equipped with an operable sunroof. The vehicle must be stopped and in Park. Valid states include: "vent" (partially opens for ventilation) and "close" (fully closes the sunroof). Some vehicles do not support intermediate or fully open positions.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | Required sunroof target state. `vent` opens for airflow; `close` fully closes. | |
| vehicle_id | Yes | Required Tesla vehicle id from `vehicle_list`. | |
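To make the parameter table concrete, here is a sketch of how a client might build the invocation for this tool. The JSON-RPC `tools/call` framing follows the MCP convention; the `vehicle_id` value and the request `id` are illustrative placeholders, not values from this listing.

```python
import json

def sunroof_call(vehicle_id: str, state: str) -> dict:
    """Build a hypothetical MCP tools/call request for vehicle_sun_roof_control.

    Per the tool description, only "vent" and "close" are valid states,
    so we reject anything else client-side before sending.
    """
    if state not in ("vent", "close"):
        raise ValueError("state must be 'vent' or 'close'")
    return {
        "jsonrpc": "2.0",
        "id": 1,  # illustrative request id
        "method": "tools/call",
        "params": {
            "name": "vehicle_sun_roof_control",
            "arguments": {"vehicle_id": vehicle_id, "state": state},
        },
    }

print(json.dumps(sunroof_call("12345", "vent"), indent=2))
```

Rejecting invalid states before the call avoids a round trip to a vehicle that, per the description, must already be stopped and in Park.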
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses critical safety constraints (must be stopped/in Park), hardware requirements (sunroof equipped), and functional limitations (vent/close only). Does not mention rate limits, async behavior, or error states.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences with zero waste. Front-loaded with main action, followed by hardware prerequisites, safety constraints, valid states, and limitations. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter physical control tool with no annotations and no output schema, the description adequately covers operational constraints and valid inputs. Minor gap: does not describe return values or error behavior (e.g., what happens if vehicle not in Park).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline of 3. The description adds 'partially opens for ventilation' and 'fully closes the sunroof,' which slightly elaborates on the schema's 'opens for airflow' and 'fully closes,' but adds minimal new semantic meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Controls the powered sunroof,' providing a specific verb and resource. It clearly differentiates from siblings (e.g., vehicle_window_control, vehicle_door_lock) by specifying this is for the sunroof only.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states prerequisites: 'Only available on vehicles equipped with an operable sunroof' and 'The vehicle must be stopped and in Park.' Also notes limitations ('Some vehicles do not support intermediate... positions'). Lacks explicit naming of alternative tools for windows/doors.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vehicle_wake_up
Wakes up the selected vehicle from sleep so it becomes online and can receive further commands. ALL operations on the vehicle must first ensure it is awake by calling this tool.
| Name | Required | Description | Default |
|---|---|---|---|
| vehicle_id | Yes | Required Tesla vehicle id from `vehicle_list`. | |
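The description mandates a wake-first ordering for every other vehicle operation. A minimal sketch of that pattern, assuming a generic `call_tool` invocation function supplied by your MCP client and an assumed `state` field in the wake-up response (the actual response shape is not documented in this listing):

```python
import time
from typing import Callable

def run_when_awake(call_tool: Callable[[str, dict], dict],
                   vehicle_id: str, tool: str, arguments: dict,
                   timeout_s: float = 60.0, poll_s: float = 5.0) -> dict:
    """Wake the vehicle, wait until it reports online, then invoke `tool`.

    call_tool is your MCP client's invocation function (hypothetical
    signature); the "state" == "online" check is an assumption about
    the wake-up response, not a documented contract.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        result = call_tool("vehicle_wake_up", {"vehicle_id": vehicle_id})
        if result.get("state") == "online":
            # Vehicle is awake; forward the real command.
            return call_tool(tool, {"vehicle_id": vehicle_id, **arguments})
        if time.monotonic() >= deadline:
            raise TimeoutError(f"vehicle {vehicle_id} did not wake in {timeout_s}s")
        time.sleep(poll_s)
```

Because the listing does not state whether wake-up is idempotent or how long it takes, the helper polls with a caller-controlled timeout rather than assuming a fixed latency.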
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden and successfully discloses the state change (sleep to online) and prerequisite nature. However, it omits idempotency details, expected latency (how long wake-up takes), or failure modes when the vehicle is unreachable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first establishes function, the second establishes mandatory usage order. Front-loaded with critical prerequisite information and emphatic capitalization ('ALL') to signal importance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter state-transition tool without output schema, the description adequately covers purpose, prerequisites, and vehicle state requirements. Could be improved by mentioning whether the tool is idempotent or typical wake-up duration.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (vehicle_id is fully documented as 'Required Tesla vehicle id from `vehicle_list`'). The description refers to 'selected vehicle' but adds no additional semantic detail beyond the schema, warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'wakes up' with clear resource 'vehicle' and distinguishes from siblings by explaining the state transition from 'sleep' to 'online'. It clearly identifies this as a state-management tool rather than a direct control action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'ALL operations on the vehicle must first ensure it is awake by calling this tool,' providing mandatory prerequisite guidance that clearly positions this tool as the first step before any sibling vehicle operations (lock, unlock, climate, etc.).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vehicle_window_control
Controls the window positions on all four doors simultaneously. Supported actions are "vent" and "close". Vent lowers the windows slightly to allow airflow; close raises them fully. Vehicle must be in Park. Regional restrictions or vehicle configuration may limit this feature.
| Name | Required | Description | Default |
|---|---|---|---|
| command | Yes | Required window command. `vent` cracks windows; `close` closes all windows. | |
| vehicle_id | Yes | Required Tesla vehicle id from `vehicle_list`. | |
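Both two-state physical controls in this listing (windows and sunroof) accept only "vent" and "close", under different parameter names. A client-side guard like the following sketch can catch bad enum values before any network call; the tool names and values come from the definitions above, while the mapping itself is an assumption about how a caller might pre-validate input.

```python
# Allowed enum values per tool, taken from the tool descriptions above.
# Note the parameter name differs: window control uses "command",
# sunroof control uses "state".
ALLOWED_STATES = {
    "vehicle_window_control": {"vent", "close"},
    "vehicle_sun_roof_control": {"vent", "close"},
}

def validate_state(tool: str, value: str) -> str:
    """Return value unchanged if it is a valid state for `tool`, else raise."""
    allowed = ALLOWED_STATES.get(tool)
    if allowed is None:
        raise KeyError(f"unknown tool: {tool}")
    if value not in allowed:
        raise ValueError(f"{tool} accepts {sorted(allowed)}, got {value!r}")
    return value
```

This does not replace the server-side Park-state and regional checks the description warns about; it only rejects inputs that can never succeed.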
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden effectively: it explains the physical scope (all four doors), mechanical behavior (vent lowers slightly for airflow; close raises fully), safety constraints (Park requirement), and operational limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences with zero waste: purpose (sentence 1), parameter enumeration (sentence 2), behavioral semantics (sentence 3), and constraints (sentence 4). Information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a safety-critical physical control with no output schema, the description adequately covers operational preconditions, behavioral outcomes, and failure limitations. Could mention error states or idempotency for a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3), but the description adds valuable elaboration beyond the schema's brief 'cracks windows' by explaining vent's purpose (airflow) and the physical mechanism (lowers slightly/raises fully).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Controls') and clear resource ('window positions on all four doors simultaneously'), distinguishing it clearly from siblings like vehicle_sun_roof_control and vehicle_door_lock/unlock.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit preconditions ('Vehicle must be in Park') and limitations ('Regional restrictions or vehicle configuration may limit this feature'), but does not explicitly name alternative tools for single-window control or other ventilation methods.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
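A minimal sketch that writes the claim file at the expected path, assuming `root` is the directory your server serves as its web root. The schema URL and field names are taken from the example above; the email argument is a placeholder you must replace with your Glama account email.

```python
import json
from pathlib import Path

def write_claim_file(root: str, email: str) -> Path:
    """Write /.well-known/glama.json under `root` with the claim payload."""
    payload = {
        "$schema": "https://glama.ai/mcp/schemas/connector.json",
        "maintainers": [{"email": email}],
    }
    path = Path(root) / ".well-known" / "glama.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(payload, indent=2))
    return path
```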
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.