nhtsa-vehicle-safety-mcp-server
Server Details
Vehicle safety data from NHTSA — recalls, complaints, crash ratings, investigations, VIN decoding.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: cyanheads/nhtsa-vehicle-safety-mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 7 of 7 tools scored.
Each tool has a clearly distinct purpose with no overlap: VIN decoding, safety ratings, comprehensive safety profiles, vehicle lookup, complaint searches, investigation searches, and recall searches. The descriptions explicitly differentiate their use cases, making misselection unlikely.
All tools follow a consistent 'nhtsa_verb_noun' pattern with snake_case throughout. The verbs (decode, get, lookup, search) are appropriately chosen for their operations, creating a predictable and readable naming scheme.
Seven tools is well-scoped for a vehicle safety server, covering core NHTSA data domains without bloat. Each tool earns its place by addressing a specific aspect of vehicle safety information, from VIN decoding to recalls.
The toolset provides complete coverage for NHTSA vehicle safety data: VIN decoding, safety ratings, comprehensive profiles, vehicle lookup, complaints, investigations, and recalls. This covers the full lifecycle from vehicle identification to safety assessment, with no obvious gaps for the domain.
Available Tools (7 tools)

nhtsa_decode_vin (Read-only)
Decode a Vehicle Identification Number to extract make, model, year, body type, engine, safety equipment, and manufacturing details. Supports single VINs or batch decode (up to 50). Partial VINs accepted — use * for unknown positions.
| Name | Required | Description | Default |
|---|---|---|---|
| vin | Yes | A single 17-character VIN (e.g., "1HGCM82633A004352") or an array of up to 50 VINs for batch decode. Partial VINs accepted — use * for unknown positions. | |
| modelYear | No | Helps resolve ambiguity for pre-1980 VINs or partial VINs. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| vehicles | Yes | Decoded vehicle information per VIN |
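Before calling the tool, a client can cheaply reject malformed full VINs locally. Below is a minimal sketch of the standard check-digit test (position 9 of a 17-character VIN, mandatory for North American vehicles); the helper name is illustrative and not part of this server:

```python
# Validate the check digit (position 9) of a full 17-character VIN,
# using the standard transliteration and weighting tables.
TRANSLITERATION = {
    **{str(d): d for d in range(10)},
    "A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8,
    "J": 1, "K": 2, "L": 3, "M": 4, "N": 5, "P": 7, "R": 9,
    "S": 2, "T": 3, "U": 4, "V": 5, "W": 6, "X": 7, "Y": 8, "Z": 9,
}
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def vin_check_digit_ok(vin: str) -> bool:
    vin = vin.upper()
    if len(vin) != 17 or any(c not in TRANSLITERATION for c in vin):
        return False  # I, O, and Q never appear in a valid VIN
    total = sum(TRANSLITERATION[c] * w for c, w in zip(vin, WEIGHTS))
    expected = "X" if total % 11 == 10 else str(total % 11)
    return vin[8] == expected
```

Note this only checks structural validity; partial VINs with `*` wildcards should be passed straight to the tool, which handles them server-side.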
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond annotations: it specifies batch size limits (up to 50 VINs), supports partial VINs with wildcards, and mentions the types of data returned (make, model, year, etc.). No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the core purpose and extracted details, the second adds important behavioral details (batch support, partial VINs). Every sentence provides essential information with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, 100% schema coverage, readOnlyHint annotation, and the existence of an output schema, the description provides complete context. It covers purpose, behavioral constraints, and usage patterns without needing to explain return values (handled by output schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds some context about partial VINs and batch decoding, but doesn't provide additional semantic meaning beyond what's in the schema descriptions. Baseline 3 is appropriate when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Decode a Vehicle Identification Number') and lists the extracted details (make, model, year, body type, engine, safety equipment, manufacturing details). It distinguishes from sibling tools by focusing on VIN decoding rather than safety ratings, complaints, investigations, or recalls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to decode VINs for vehicle details) and mentions batch capability and partial VIN support. However, it doesn't explicitly state when NOT to use it or directly compare it to sibling tools like 'nhtsa_lookup_vehicles' which might have overlapping functionality.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nhtsa_get_safety_ratings (Read-only)
Get NCAP crash test ratings and ADAS feature availability for a vehicle. Use when the user specifically wants crash test stars, rollover risk, or wants to compare safety features across vehicles. NCAP data available from 1990+, best coverage for 2011+.
| Name | Required | Description | Default |
|---|---|---|---|
| make | Yes | Vehicle manufacturer. | |
| model | Yes | Vehicle model. | |
| modelYear | Yes | Model year. NCAP coverage increases significantly for 2011+. | |
| vehicleId | No | Specific NCAP vehicle ID (from prior results). Skips the year/make/model lookup. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ratings | Yes | Safety ratings per vehicle variant |
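This tool mirrors the two-step flow of NHTSA's public SafetyRatings web service: list the variants for a year/make/model, then fetch ratings by VehicleId — which is why passing vehicleId skips the lookup. A sketch of the underlying URL shapes, assuming the documented api.nhtsa.gov endpoints; the server's internals may differ:

```python
from urllib.parse import quote

BASE = "https://api.nhtsa.gov/SafetyRatings"

def variants_url(model_year: int, make: str, model: str) -> str:
    # Step 1: list vehicle variants (each carrying a VehicleId)
    # for a given model year, make, and model.
    return f"{BASE}/modelyear/{model_year}/make/{quote(make)}/model/{quote(model)}"

def ratings_url(vehicle_id: int) -> str:
    # Step 2: fetch crash-test ratings for one specific variant.
    return f"{BASE}/VehicleId/{vehicle_id}"
```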
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotation readOnlyHint=true already indicates this is a safe read operation. The description adds useful context about NCAP data coverage (1990+, best for 2011+), which helps set expectations, but doesn't disclose other behavioral traits like rate limits, authentication needs, or response format details.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage guidance and data coverage details. Every sentence adds value without redundancy, making it efficient and well-structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, 100% schema coverage, readOnlyHint annotation, and the presence of an output schema (which handles return values), the description provides sufficient context about purpose, usage, and data coverage, making it complete for agent use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds no additional parameter semantics beyond what's in the schema, such as explaining interactions between vehicleId and other parameters, but meets the baseline for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get NCAP crash test ratings and ADAS feature availability') and resource ('for a vehicle'), distinguishing it from siblings like decode_vin or search_recalls by focusing on safety ratings rather than VIN decoding or recall data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use this tool ('when the user specifically wants crash test stars, rollover risk, or wants to compare safety features across vehicles') and provides temporal context ('NCAP data available from 1990+, best coverage for 2011+'), though it doesn't explicitly mention when not to use it or name alternatives.
nhtsa_get_vehicle_safety (Read-only)
Get a comprehensive safety profile for a vehicle. Combines NCAP crash test ratings, recalls, and complaint summary into a single response. Use as the default when asked about vehicle safety, reliability, or purchase decisions.
| Name | Required | Description | Default |
|---|---|---|---|
| make | Yes | Vehicle manufacturer (e.g., "Toyota", "Ford"). Case-insensitive. | |
| model | Yes | Vehicle model (e.g., "Camry", "F-150"). Case-insensitive. | |
| modelYear | Yes | Model year (e.g., 2020). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| recalls | Yes | All recalls for this vehicle |
| warnings | Yes | Warnings about sections that could not be loaded from NHTSA |
| safetyRatings | Yes | Crash test ratings per vehicle variant (e.g., FWD vs AWD) |
| complaintSummary | Yes | Summary of consumer complaints |
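The required warnings field implies the server degrades gracefully when one NHTSA section fails instead of failing the whole call. A hypothetical sketch of that aggregation pattern (function names are illustrative, not the server's actual code):

```python
def build_safety_profile(fetch_ratings, fetch_recalls, fetch_complaints):
    """Combine three NHTSA sections into one response; if a section
    fails to load, record a warning rather than aborting the call."""
    profile = {"safetyRatings": [], "recalls": [],
               "complaintSummary": None, "warnings": []}
    sections = [("safetyRatings", fetch_ratings),
                ("recalls", fetch_recalls),
                ("complaintSummary", fetch_complaints)]
    for key, fetch in sections:
        try:
            profile[key] = fetch()
        except Exception as exc:  # collect the failure, don't abort
            profile["warnings"].append(f"Could not load {key}: {exc}")
    return profile
```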
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true, so the agent knows this is a safe read operation. The description adds useful context about what data is included (NCAP ratings, recalls, complaints) and that it's a combined response, but doesn't provide additional behavioral details like rate limits, authentication needs, or response format specifics.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that are front-loaded with the core purpose and followed by clear usage guidance. Every word earns its place with no redundancy or wasted text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has annotations (readOnlyHint), a complete input schema (100% coverage), and an output schema exists, the description provides exactly what's needed: clear purpose, usage context, and what data to expect, without needing to explain parameters or return values that are already documented elsewhere.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents all three parameters (make, model, modelYear) with descriptions and examples. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get a comprehensive safety profile') and resources ('vehicle'), and explicitly distinguishes it from siblings by mentioning it combines NCAP crash test ratings, recalls, and complaint summaries into a single response, unlike more specialized sibling tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines by stating 'Use as the default when asked about vehicle safety, reliability, or purchase decisions,' which clearly indicates when to prefer this tool over alternatives without needing to list all siblings individually.
nhtsa_lookup_vehicles (Read-only)
Look up valid makes, models, and vehicle types in NHTSA's database. Use to resolve ambiguous vehicle names, find correct make/model spelling, or discover what models a manufacturer produces.
| Name | Required | Description | Default |
|---|---|---|---|
| make | No | Make name (required for "models" and "vehicle_types"). Partial match supported. | |
| modelYear | No | Filter models to a specific year. Only for "models" operation. | |
| operation | Yes | "makes" (all makes — warning: 12K+ results), "models" (models for a make), "vehicle_types" (types for a make), "manufacturer" (manufacturer details). | |
| manufacturer | No | Manufacturer name or ID (for "manufacturer" operation). Partial match supported. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | Number of results |
| makes | No | Results for "makes" operation |
| models | No | Results for "models" operation |
| operation | Yes | The operation that was performed |
| vehicleTypes | No | Results for "vehicle_types" operation |
| manufacturers | No | Results for "manufacturer" operation |
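The operation parameter changes which other parameters are required, as the schema notes. A hypothetical validation sketch of those interactions (illustrative only, not the server's actual code):

```python
def validate_lookup_args(operation, make=None, model_year=None, manufacturer=None):
    """Check the parameter interactions described in the schema
    and return a list of human-readable errors (empty if valid)."""
    errors = []
    if operation in ("models", "vehicle_types") and not make:
        errors.append(f'"make" is required for the "{operation}" operation')
    if operation == "manufacturer" and not manufacturer:
        errors.append('"manufacturer" is required for the "manufacturer" operation')
    if model_year is not None and operation != "models":
        errors.append('"modelYear" only applies to the "models" operation')
    return errors
```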
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the readOnlyHint annotation, such as the tool's role in resolving ambiguity and discovering manufacturer data. It does not contradict annotations, and it provides practical use cases that enhance understanding of the tool's behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and efficiently lists three specific use cases in a single, well-structured sentence. Every part of the description adds value without redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations, a rich input schema with full coverage, and an output schema, the description is complete enough. It effectively communicates the tool's purpose and usage without needing to detail parameters or return values, which are covered elsewhere.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description does not add additional parameter semantics, so it meets the baseline for high schema coverage without compensating further.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('look up') and resources ('valid makes, models, and vehicle types'), and it distinguishes itself from siblings by focusing on database lookup rather than VIN decoding, safety ratings, or complaint searches.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage contexts ('resolve ambiguous vehicle names, find correct make/model spelling, or discover what models a manufacturer produces'), but it does not explicitly state when NOT to use this tool or name specific alternatives among siblings.
nhtsa_search_complaints (Read-only)
Search consumer safety complaints filed with NHTSA for a specific vehicle. Returns a component breakdown and the most recent complaints. Use for common problems, failure patterns, or owner-reported issues.
| Name | Required | Description | Default |
|---|---|---|---|
| make | Yes | Vehicle manufacturer. | |
| model | Yes | Vehicle model. | |
| component | No | Filter to a specific component (uppercase, e.g., "ENGINE", "AIR BAGS", "ELECTRICAL SYSTEM"). Matches within comma-separated component lists. Omit to see all. | |
| modelYear | Yes | Model year. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| complaints | Yes | Most recent complaints (up to 50) |
| totalCount | Yes | Total complaints matching criteria |
| componentBreakdown | Yes | Complaints grouped by component, sorted by frequency |
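Because NHTSA records list a complaint's affected components as a comma-separated string, the component breakdown has to split and count each entry. An illustrative sketch of that grouping, matching the shape of the componentBreakdown output field (field names are assumptions):

```python
from collections import Counter

def component_breakdown(complaints):
    """Group complaints by component. Each record may list several
    comma-separated components; every one is counted separately."""
    counts = Counter()
    for c in complaints:
        for part in c.get("components", "").split(","):
            part = part.strip().upper()
            if part:
                counts[part] += 1
    # Sorted by frequency, most common first.
    return counts.most_common()
```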
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotation readOnlyHint=true already indicates this is a safe read operation. The description adds useful behavioral context about what the search returns (component breakdown and recent complaints) and the filtering capability, but doesn't disclose other behavioral traits like rate limits, authentication needs, or pagination behavior. No contradiction with annotations exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste. The first sentence states the purpose and output, and the second provides usage guidance. It's front-loaded and efficiently structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (readOnlyHint), 100% schema coverage, and the presence of an output schema, the description is complete enough. It covers purpose, usage, and key behavioral aspects without needing to explain parameters or return values in detail.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal semantic value beyond the schema—it mentions filtering by component and implies the make/model/year are for vehicle specification, but doesn't provide additional syntax or format details. Baseline 3 is appropriate given high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search consumer safety complaints'), resource ('filed with NHTSA for a specific vehicle'), and output ('component breakdown and the most recent complaints'). It distinguishes from siblings by focusing on complaints rather than VIN decoding, safety ratings, recalls, or investigations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage context ('Use for common problems, failure patterns, or owner-reported issues'), which helps guide when to select this tool. However, it doesn't explicitly state when NOT to use it or mention specific alternatives among the sibling tools, though the purpose differentiation implies alternatives.
nhtsa_search_investigations (Read-only)
Search NHTSA defect investigations (Preliminary Evaluations, Engineering Analyses, Defect Petitions, Recall Queries). All filters are ANDed — each additional filter narrows results. The make, model, and query filters all search investigation subject/description text (there are no structured make/model fields in the investigations dataset). First query may be slow (~10s) while the investigation index loads; subsequent queries use a cached index.
| Name | Required | Description | Default |
|---|---|---|---|
| make | No | Free-text filter — matches manufacturer name against subject/description text. ANDed with other filters. | |
| limit | No | Max results to return. | 20 |
| model | No | Free-text filter — matches model name against subject/description text. ANDed with other filters. | |
| query | No | Free-text search across investigation subjects and descriptions. | |
| offset | No | Pagination offset. | 0 |
| status | No | Filter by status: "O" (Open), "C" (Closed). | |
| investigationType | No | Filter by type: "PE" (Preliminary Evaluation), "EA" (Engineering Analysis), "DP" (Defect Petition), "RQ" (Recall Query). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| total | Yes | Total matching investigations |
| investigations | Yes | Matching investigations |
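The AND semantics and text-only matching described above can be sketched as a simple in-memory filter over the cached index (record field names here are assumptions, not the server's actual schema):

```python
def search_investigations(index, make=None, model=None, query=None,
                          status=None, investigation_type=None):
    """Apply every supplied filter with AND semantics. make, model, and
    query all match against the free-text subject + description, since
    the dataset has no structured make/model fields."""
    def matches(inv):
        text = f'{inv.get("subject", "")} {inv.get("description", "")}'.lower()
        for needle in (make, model, query):
            if needle and needle.lower() not in text:
                return False
        if status and inv.get("status") != status:
            return False
        if investigation_type and inv.get("type") != investigation_type:
            return False
        return True
    return [inv for inv in index if matches(inv)]
```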
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond annotations: it explains that make/model filters search text (not structured fields), notes the AND logic for filters, and warns about initial slow queries (~10s) due to index loading with caching for subsequent calls. This enhances transparency without contradicting annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by key behavioral details in a logical flow. Each sentence adds value: the first defines the tool, the second explains filter logic, the third clarifies text search nature, and the fourth notes performance considerations. There is no wasted text, making it efficient and well-structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, read-only operation), the description is complete. It covers purpose, usage context, behavioral traits (filter logic, text search, performance), and the existence of an output schema means return values need not be explained. With annotations and schema providing structured details, the description fills in necessary gaps effectively.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 7 parameters. The description adds some semantics by clarifying that make/model/query filters search text in subject/description and that filters are ANDed, but it does not provide additional syntax or format details beyond what the schema already covers. This meets the baseline for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search NHTSA defect investigations') and resource ('investigations'), listing the exact types (Preliminary Evaluations, Engineering Analyses, Defect Petitions, Recall Queries). It distinguishes from sibling tools like 'nhtsa_search_complaints' and 'nhtsa_search_recalls' by focusing on investigations rather than complaints or recalls.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool—for searching defect investigations with free-text filters. It mentions that filters are ANDed, which helps guide usage. However, it does not explicitly state when not to use it or name specific alternatives among siblings, though the tool name implies it's for investigations vs. complaints/recalls.
nhtsa_search_recalls (Read-only)
Search recall campaigns by vehicle or campaign number. Use for specific recall lookups, recall history for a vehicle, or looking up a known campaign number.
| Name | Required | Description | Default |
|---|---|---|---|
| make | No | Vehicle manufacturer. Required with model and modelYear when not using campaignNumber. | |
| model | No | Vehicle model. Required with make and modelYear. | |
| dateRange | No | Filter recalls by received date. Applied locally since the API lacks native date filtering. | |
| modelYear | No | Model year. Required with make and model. | |
| campaignNumber | No | NHTSA campaign number (e.g., "20V682000"). When provided, returns campaign details. Other params ignored. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| recalls | Yes | Matching recall campaigns |
| totalCount | Yes | Total recalls matching criteria |
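The parameter interactions — campaignNumber overriding everything else, and the date range being applied locally after fetching — can be sketched as request-planning logic (illustrative only, not the server's actual code):

```python
def plan_recall_search(campaign_number=None, make=None, model=None,
                       model_year=None, date_range=None):
    """Decide how a recall search resolves. A campaign number takes
    precedence and all other parameters are ignored; otherwise make,
    model, and modelYear are all required, and any date range must be
    applied locally because the API has no native date filter."""
    if campaign_number:
        return {"mode": "campaign", "campaignNumber": campaign_number}
    if not (make and model and model_year):
        raise ValueError("make, model, and modelYear are required without a campaignNumber")
    return {"mode": "vehicle", "make": make, "model": model,
            "modelYear": model_year, "localDateFilter": date_range is not None}
```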
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotation already declares readOnlyHint=true, so the agent knows this is a safe read operation. The description adds useful context about the three specific use cases, but doesn't disclose additional behavioral traits like rate limits, authentication requirements, or what happens when parameters conflict (though the schema covers the campaignNumber override). With annotations covering the safety profile, a 3 is appropriate.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence states the core functionality, and the second sentence provides specific usage contexts. No wasted words, and the information is front-loaded effectively.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that annotations cover safety (readOnlyHint), schema coverage is 100% with detailed parameter descriptions, and there's an output schema (not shown but indicated in context signals), the description is complete enough. It provides the essential purpose and usage guidance without needing to repeat what's already documented elsewhere in structured fields.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description mentions searching 'by vehicle or campaign number' which aligns with the schema's make/model/modelYear vs campaignNumber parameters, but doesn't add meaningful semantic information beyond what the schema provides. Baseline 3 is correct when schema does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('search recall campaigns') and resource ('by vehicle or campaign number'), distinguishing it from siblings like decode_vin or search_complaints. It explicitly mentions three distinct use cases: specific recall lookups, recall history for a vehicle, and looking up a known campaign number.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('for specific recall lookups, recall history for a vehicle, or looking up a known campaign number'), but doesn't explicitly state when NOT to use it or mention specific alternatives among the sibling tools. The guidance is helpful but could be more comprehensive about tool selection.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.