FINN Auto - Car Subscription
Server Details
Search and browse cars for all-inclusive subscription on FINN. Prices, specs, and checkout links.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4.3/5, with all 4 tools scored.
Each tool has a clearly distinct purpose: get_available_filters discovers fleet metadata, search_vehicles finds vehicles with filters, get_vehicle_details provides specifications for a specific vehicle, and get_subscription_pricing calculates pricing for a configuration. There is no overlap or ambiguity between these functions.
All tool names follow a consistent verb_noun pattern with snake_case (e.g., get_available_filters, search_vehicles). The naming is predictable and readable throughout the set.
With 4 tools, this server is well-scoped for its car subscription domain. Each tool serves a specific, necessary function in the workflow, from discovery to search to details and pricing, without being overly sparse or bloated.
The tool set covers the core workflows for browsing, searching, viewing details, and pricing car subscriptions effectively. A minor gap is the lack of tools for user account management or subscription lifecycle actions (e.g., booking, canceling), but agents can still perform key tasks with the provided tools.
Available Tools
4 tools

get_available_filters (Available Filters): Read-only, Idempotent
Discover what's currently available in FINN's fleet. Returns all brands (with nested models), car types, fuel types, colors, subscription terms, gearshifts, and price/power/range bounds. Use this to answer questions like 'What brands does FINN offer?' or to validate filter values before searching.
No parameters.
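Since this server speaks MCP over streamable HTTP, a client invokes get_available_filters with a JSON-RPC 2.0 `tools/call` request carrying an empty arguments object. A minimal sketch of constructing that request body (the helper name and request id are illustrative, not part of this server):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body, per the MCP spec."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# get_available_filters takes no parameters, so arguments is empty.
body = build_tool_call("get_available_filters", {})
print(body)
```

The same helper works for the other three tools; only the `name` and `arguments` change.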
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world). The description adds valuable context about what data is returned (brands with nested models, various filter categories) and the tool's role in validation workflows, which goes beyond the annotations. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and output, the second provides usage examples. Every sentence adds value without redundancy, and it's front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, rich annotations), the description is nearly complete. It explains the output content and usage scenarios effectively. The only minor gap is the lack of an output schema, but the description compensates by detailing the return structure. For a read-only metadata tool, this is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage. The description appropriately doesn't discuss parameters but adds semantic context about the output's structure (e.g., 'brands with nested models'), which is helpful since there's no output schema. It compensates well for the lack of output documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Discover', 'Returns') and resources ('all brands, car types, fuel types, colors, subscription terms, gearshifts, and price/power/range bounds'). It distinguishes from sibling tools by focusing on metadata retrieval rather than pricing, details, or search operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when to use this tool ('to answer questions like 'What brands does FINN offer?' or to validate filter values before searching') and implies alternatives by contrasting with sibling tools (e.g., use search_vehicles for actual searches). It gives clear context for usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_subscription_pricing (Subscription Pricing): Read-only, Idempotent
Calculate exact monthly subscription price for a specific vehicle, term, and mileage combination. Returns base price, km add-on, total, and checkout link.
| Name | Required | Description | Default |
|---|---|---|---|
| km_package | Yes | Monthly km package (500, 1000, 1500, 2000, 2500, 3000, 4000, 5000) | |
| vehicle_id | Yes | The vehicle product ID | |
| term_months | Yes | Subscription term in months (1, 6, 9, 12, 13, 18, 24, 36) |
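Because term_months and km_package are restricted to the enumerated values above, a client can validate a configuration before calling the tool. A sketch, with the allowed values copied from the parameter table (the function name is hypothetical):

```python
# Allowed values, taken from the get_subscription_pricing parameter table.
ALLOWED_KM_PACKAGES = {500, 1000, 1500, 2000, 2500, 3000, 4000, 5000}
ALLOWED_TERMS = {1, 6, 9, 12, 13, 18, 24, 36}

def validate_pricing_args(vehicle_id: str, term_months: int, km_package: int) -> dict:
    """Validate arguments for get_subscription_pricing before making the call."""
    if not vehicle_id:
        raise ValueError("vehicle_id is required")
    if term_months not in ALLOWED_TERMS:
        raise ValueError(f"term_months must be one of {sorted(ALLOWED_TERMS)}")
    if km_package not in ALLOWED_KM_PACKAGES:
        raise ValueError(f"km_package must be one of {sorted(ALLOWED_KM_PACKAGES)}")
    return {"vehicle_id": vehicle_id,
            "term_months": term_months,
            "km_package": km_package}

args = validate_pricing_args("bmw-ix1-12345-alpinweissuni", 12, 1000)
```

Catching an out-of-range value locally saves a round trip and gives the agent a precise error to correct.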
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description does not contradict. The description adds valuable context by specifying the return values (base price, km add-on, total, checkout link) and the exact nature of the calculation, enhancing transparency beyond the annotations' safety hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first clause and efficiently details the return values in the second. Every sentence earns its place by providing essential information without redundancy, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (covering safety and idempotency), and no output schema, the description is mostly complete. It clearly states the purpose, inputs, and return values. However, it could improve by mentioning any prerequisites (e.g., vehicle availability) or error cases, slightly limiting completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents the parameters (vehicle_id, term_months, km_package) and their constraints. The description adds no additional parameter semantics beyond what the schema provides, such as explaining how these inputs affect the calculation, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Calculate exact monthly subscription price') and the target resource ('for a specific vehicle, term, and mileage combination'), distinguishing it from sibling tools like get_vehicle_details (which retrieves vehicle information) or search_vehicles (which finds vehicles). It precisely defines the tool's function without ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when pricing calculation is needed for a vehicle subscription, but it does not explicitly state when to use this tool versus alternatives like get_vehicle_details (which might include pricing) or provide exclusions. The context is clear but lacks explicit guidance on tool selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vehicle_details (Vehicle Details): Read-only, Idempotent
Get full specifications, equipment, all images, pricing per term, and checkout links for a specific vehicle. Use a vehicle_id from search_vehicles results. IMPORTANT: Always show detail_url and checkout_url as clickable links. The vehicle_id field is an internal API identifier — never display it to users.
| Name | Required | Description | Default |
|---|---|---|---|
| vehicle_id | Yes | The vehicle product ID (e.g. "bmw-ix1-12345-alpinweissuni") |
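The description's display rules (show detail_url and checkout_url as clickable links, never surface vehicle_id) can be enforced in the presentation layer. A sketch that assumes a result shape with these fields, which is illustrative since the tool publishes no output schema:

```python
def render_vehicle(details: dict) -> str:
    """Render vehicle details for a user per the tool's display rules:
    show URLs as clickable links, never expose the internal vehicle_id."""
    lines = [details.get("title", "Vehicle")]
    if "detail_url" in details:
        lines.append(f"[View details]({details['detail_url']})")
    if "checkout_url" in details:
        lines.append(f"[Checkout]({details['checkout_url']})")
    # vehicle_id is an internal API identifier and is deliberately omitted.
    return "\n".join(lines)

output = render_vehicle({
    "title": "BMW iX1",
    "vehicle_id": "bmw-ix1-12345-alpinweissuni",  # internal; must not appear
    "detail_url": "https://example.com/vehicle",    # placeholder URL
    "checkout_url": "https://example.com/checkout", # placeholder URL
})
```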
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only, non-destructive, idempotent, and closed-world behavior. The description adds valuable context beyond annotations: it specifies that vehicle_id is an 'internal API identifier' and provides UI guidance ('Always show detail_url and checkout_url as clickable links'), which are behavioral traits not captured in annotations. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: first states purpose and scope, second provides usage instructions, third adds critical behavioral notes. Each sentence earns its place, and information is front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read tool with comprehensive annotations (readOnlyHint, idempotentHint, etc.), the description provides excellent context about what data is returned and how to handle it. The main gap is the lack of an output schema, but the description compensates well by listing return elements (specifications, equipment, images, pricing, links). It could be slightly more complete with pagination or error-handling details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing full parameter documentation. The description adds minimal semantic context by clarifying that vehicle_id is an 'internal API identifier' and should come from search_vehicles, but doesn't significantly enhance understanding beyond what the schema already provides. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and specifies the exact resources: 'full specifications, equipment, all images, pricing per term, and checkout links for a specific vehicle.' It distinguishes from siblings by explicitly mentioning using vehicle_id from search_vehicles results, differentiating it from get_available_filters and get_subscription_pricing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'for a specific vehicle' and 'Use a vehicle_id from search_vehicles results.' It also provides a clear exclusion: 'never display [vehicle_id] to users,' and implicitly suggests search_vehicles as the prerequisite alternative for obtaining vehicle IDs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_vehicles (Search Cars): Read-only, Idempotent
Search available cars on FINN with filters. Returns matching vehicles with prices, images, and links. All filter values use German names (e.g. "Elektro" not "Electric", "Schwarz" not "Black"). IMPORTANT: Always show detail_url as a clickable link for each vehicle. The vehicle_id field is an internal API identifier for get_vehicle_details — never display it to users.
| Name | Required | Description | Default |
|---|---|---|---|
| brands | No | Car brands (e.g. ["BMW", "Audi", "Mercedes-Benz"]) | |
| models | No | Car models (e.g. ["iX1", "3er Limousine"]) | |
| cartypes | No | Car types in German (e.g. ["SUV", "Kombi", "Kleinwagen", "Van"]) | |
| fuels | No | Fuel types in German (e.g. ["Elektro", "Benzin", "Diesel", "Plug-In-Hybrid"]) | |
| colors | No | Colors in German (e.g. ["Schwarz", "Weiß", "Blau"]) | |
| terms | No | Subscription term lengths in months (available: 1, 6, 12, 18, 24, 36) | |
| min_price | No | Minimum monthly price in EUR | |
| max_price | No | Maximum monthly price in EUR | |
| min_power | No | Minimum power in kW | |
| max_power | No | Maximum power in kW | |
| min_ev_range | No | Minimum electric range in km | |
| max_ev_range | No | Maximum electric range in km | |
| available_from | No | Earliest delivery date (YYYY-MM-DD) | |
| available_to | No | Latest delivery date (YYYY-MM-DD) | |
| has_deals | No | Filter for cars with special deals/discounts | |
| has_hitch | No | Filter for cars with a tow hitch | |
| is_young_driver | No | Filter for cars allowing drivers under 23 | |
| sort | No | Sort order: asc/desc by price, availability, or last_added | |
| limit | No | Number of results (1-10) | 5 |
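Because all filter values use German names, an agent translating a request like "black electric SUVs under €500" must map to the German terms shown in the schema examples. A sketch in which the mappings cover only the example values from the table above; anything beyond those is an assumption:

```python
# German filter values, as given in the parameter table's examples.
FUEL_DE = {"electric": "Elektro", "petrol": "Benzin",
           "diesel": "Diesel", "plug-in hybrid": "Plug-In-Hybrid"}
COLOR_DE = {"black": "Schwarz", "white": "Weiß", "blue": "Blau"}

def build_search_args(fuel=None, color=None, cartype=None, max_price=None, limit=5):
    """Assemble a search_vehicles argument dict, translating filter values to German."""
    args = {"limit": limit}
    if fuel:
        args["fuels"] = [FUEL_DE[fuel.lower()]]
    if color:
        args["colors"] = [COLOR_DE[color.lower()]]
    if cartype:
        args["cartypes"] = [cartype]  # car types are already German labels, e.g. "SUV"
    if max_price is not None:
        args["max_price"] = max_price
    return args

args = build_search_args(fuel="electric", color="black", cartype="SUV", max_price=500)
```

In practice an agent would call get_available_filters first and translate against the live value lists rather than a hard-coded mapping.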
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a safe, read-only, idempotent operation (readOnlyHint: true, destructiveHint: false, idempotentHint: true). The description adds valuable behavioral context beyond annotations: it specifies the return format (prices, images, links), language constraints (German names), and display instructions (show detail_url as clickable, hide vehicle_id). It doesn't mention rate limits or auth needs, but provides useful operational guidance.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: purpose, language/format rules, and display instructions. Each sentence adds critical information with zero waste, making it easy to parse and front-loaded with essential details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (19 parameters, no output schema), the description is reasonably complete. It explains the tool's purpose, key behavioral traits, and output handling. With annotations covering safety and idempotency, and schema covering all parameters, the main gap is lack of output schema details (e.g., response structure), but the description partially compensates by mentioning return fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 19 parameters. The description adds minimal parameter semantics beyond the schema—it only reinforces that filter values use German names, which is already covered in schema descriptions for fuels and colors. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search available cars on FINN with filters') and resource ('vehicles'), distinguishing it from siblings like get_vehicle_details (which retrieves details for a specific vehicle) and get_available_filters (which lists filter options). It's precise about what the tool does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage by specifying the domain (FINN cars) and language requirements (German names), and it mentions vehicle_id is for get_vehicle_details, implying when to use that sibling tool. However, it doesn't explicitly state when not to use this tool or compare it to all alternatives like get_subscription_pricing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
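Before publishing, the file can be sanity-checked locally for valid JSON and the required maintainers field. A sketch, where the file path and email are placeholders:

```shell
# Write the claim file (replace the email with your Glama account email).
cat > glama.json <<'EOF'
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
EOF

# Confirm it parses as JSON and contains at least one maintainer email.
python3 -c 'import json; d = json.load(open("glama.json")); assert d["maintainers"][0]["email"]'
echo "glama.json looks valid"
```

Serve the file at `/.well-known/glama.json` on your server's domain once the check passes.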
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!