Mink Viking Experience
Server Details
Real-time booking, 7-currency pricing, and FAQ for Reykjavik's Viking portrait photo studio.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 5 of 5 tools scored.
Each tool has a clearly distinct purpose with no overlap: check_availability for specific date slots, find_next_available for scanning future dates, get_booking_url for generating booking links, get_business_info for operational details, and get_pricing for cost information. The descriptions clearly differentiate their functions, eliminating any risk of misselection.
All tool names follow a consistent verb_noun pattern (e.g., check_availability, find_next_available, get_booking_url, get_business_info, get_pricing), using snake_case throughout. This uniformity makes the tool set predictable and easy to navigate for an agent.
With 5 tools, the set is well-scoped for a booking and information system, covering availability checking, booking facilitation, and business details without being overly sparse or bloated. Each tool serves a necessary function in the booking workflow.
The tool set covers core booking operations (availability, pricing, booking URLs) and business information effectively, with no obvious gaps for the stated purpose. A minor gap is the lack of a tool for actual booking confirmation or payment processing, but the descriptions imply this is handled externally via the generated URL, so agents can work around it.
Available Tools
5 tools

check_availability — Check booking availability (Read-only, Idempotent)
Check real-time appointment availability for a specific date. Returns available start times in 24-hour HH:MM format, session duration, a pre-filled booking URL, and pricing. An empty slot list means the date is fully booked, not an error. Prices are returned in EUR (base) plus an optional secondary currency of the caller's choice.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Date to check in YYYY-MM-DD format. Must be today or future, within 60 days. | |
| currency | No | Optional ISO 4217 currency code to use as the secondary display currency alongside EUR (e.g. USD, GBP, CAD, AUD, CNY, ISK). Defaults to ISK (Mink's local currency). Invalid codes silently fall back to ISK. The full multi-currency breakdown is always available in the structured response. | |
| participants | No | Number of people attending the session (1-20). Defaults to 1. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains that an empty slot list means 'fully booked, not an error' (clarifying response semantics), specifies price currency details (EUR base plus optional secondary), and mentions the pre-filled booking URL. While annotations cover read-only/idempotent safety, the description enriches understanding of output behavior and error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the core purpose and output details, the second clarifies edge cases and currency information. Every phrase adds value (e.g., '24-hour HH:MM format', 'empty slot list means fully booked'), with no redundant or wasted wording.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with comprehensive annotations and full schema coverage, the description provides strong contextual completeness by explaining output format, currency handling, and empty result semantics. The only minor gap is the lack of an output schema (not provided), but the description compensates well by detailing the return structure. It could slightly improve by mentioning rate limits or authentication needs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents all three parameters (date format/constraints, currency behavior, participants range). The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline but doesn't provide extra semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb ('Check') and resource ('real-time appointment availability') with precise output details (start times, duration, URL, pricing). It effectively distinguishes from siblings like 'find_next_available' (which likely finds next slots rather than checking a specific date) and 'get_pricing' (which might retrieve pricing without availability).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('for a specific date') and implicitly distinguishes it from alternatives by specifying its scope (date-specific availability with pricing). However, it doesn't explicitly state when NOT to use it or name specific sibling alternatives for comparison, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_next_available — Find next available dates (Read-only, Idempotent)
Scan upcoming days and return the first N dates that have available slots. Use this when a visitor has no specific date in mind or when a requested date is fully booked.
| Name | Required | Description | Default |
|---|---|---|---|
| currency | No | Optional ISO 4217 currency code for the pricing display on each matched date. Defaults to ISK. Invalid codes silently fall back to ISK. | |
| max_dates | No | Maximum number of dates to return (max 5). | |
| days_ahead | No | How many days ahead to scan (max 14). | |
| start_date | No | Optional starting date (YYYY-MM-DD) for the scan. Defaults to today. | |
| participants | No | Number of people attending. | |
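The scan behavior described above can be sketched as follows. `has_slots` is a hypothetical stand-in for the server's per-date availability lookup, and the caps mirror the documented maxima (14 days ahead, 5 dates returned):

```python
from datetime import date, timedelta
from typing import Callable

def find_next_available(has_slots: Callable[[date], bool],
                        start_date: date,
                        days_ahead: int = 14,
                        max_dates: int = 5) -> list[date]:
    """Scan up to `days_ahead` days from `start_date` (inclusive) and return
    the first `max_dates` dates for which `has_slots` reports availability."""
    days_ahead = min(days_ahead, 14)  # documented cap
    max_dates = min(max_dates, 5)     # documented cap
    found: list[date] = []
    for offset in range(days_ahead):
        day = start_date + timedelta(days=offset)
        if has_slots(day):
            found.append(day)
            if len(found) == max_dates:
                break
    return found
```

This is why the tool suits the "no specific date in mind" case: it walks forward from `start_date` and stops as soon as enough open dates are found.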
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds useful context about scanning 'upcoming days' and handling fully booked scenarios, but doesn't disclose additional behavioral traits like rate limits, authentication needs, or what 'available slots' means in detail. With annotations covering core traits, a 3 is appropriate as the description adds some value but not rich behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose, followed by usage guidelines. Every sentence earns its place with zero waste, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 parameters, no output schema), the description is complete enough for selection and basic use, with clear purpose and guidelines. However, it lacks details on return values (e.g., format of dates, what 'available slots' entails) which would be helpful since there's no output schema, slightly reducing completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with each parameter well-documented in the schema (e.g., defaults, constraints, fallback behavior). The description doesn't add meaning beyond the schema, as it mentions no parameters. Baseline 3 is correct when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('scan upcoming days', 'return the first N dates') and resources ('available slots'), and it distinguishes this from sibling tools by explaining it's for when a visitor has no specific date or when a requested date is fully booked, unlike check_availability which likely requires a specific date.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('when a visitor has no specific date in mind or when a requested date is fully booked'), providing clear context for selection over alternatives like check_availability, which presumably requires a specific date input.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_booking_url — Generate pre-filled booking URL (Read-only, Idempotent)
Generate a pre-filled booking URL that opens mink.is/booknow with the date and participant count already selected. Use this as the final step in a booking conversation to hand the visitor off to checkout.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Date in YYYY-MM-DD format. | |
| participants | No | | |
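For illustration, a pre-filled URL of this shape might be assembled as below. The query parameter names are assumptions for the sketch; the tool itself returns the authoritative URL:

```python
from urllib.parse import urlencode

def build_booking_url(booking_date: str, participants: int = 1) -> str:
    """Assemble a pre-filled booking URL for mink.is/booknow.
    Parameter names here are illustrative, not the server's actual scheme."""
    query = urlencode({"date": booking_date, "participants": participants})
    return f"https://mink.is/booknow?{query}"
```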
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, indicating safe, repeatable operations. The description adds valuable context by specifying that it 'opens mink.is/booknow' (impending navigation) and is for 'final step' handoff, which clarifies the tool's role in workflow beyond annotations. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences that are front-loaded with the core purpose and followed by usage guidance. Every word earns its place, with no redundancy or fluff, making it highly efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema), the description covers purpose, usage, and behavioral context well. It doesn't explain return values (no output schema), but annotations handle safety and idempotency. The description could slightly enhance completeness by mentioning parameter defaults or constraints, but it's largely adequate for the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50% (only 'date' parameter has a description). The description mentions 'date and participant count already selected,' which aligns with the two parameters but doesn't add detailed semantics beyond the schema's pattern and constraints. With moderate coverage, the description provides basic mapping but minimal extra insight, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Generate a pre-filled booking URL') and resource ('mink.is/booknow'), specifying that it pre-fills date and participant count. It distinguishes from siblings like check_availability or get_pricing by focusing on URL generation for final checkout handoff, not availability checking or pricing queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'as the final step in a booking conversation to hand the visitor off to checkout.' This provides clear context for usage versus alternatives like check_availability (for initial availability checks) or find_next_available (for finding open slots), though it doesn't explicitly name exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_business_info — Get business hours and location (Read-only, Idempotent)
Return business operating hours, full street address, GPS coordinates, phone, email, and booking policy for Mink Viking Experience in Reykjavik.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, idempotent, and closed-world behavior. The description adds value by specifying the exact data returned (hours, address, coordinates, etc.) and the business context, which helps the agent understand the scope and content of the response beyond the generic safety hints provided by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently lists all returned data points and specifies the business context. Every word contributes essential information without redundancy or fluff, making it front-loaded and highly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, rich annotations), the description is nearly complete. It clearly states what data is returned and for which business. A minor gap is the lack of explicit output format details (e.g., structured vs. text), but annotations cover behavioral aspects well, making this adequate for the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the schema fully documents the lack of inputs. The description appropriately adds no parameter details, as none are needed, and instead focuses on the output semantics. This meets the baseline of 4 for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Return') and enumerates the exact resources provided: business operating hours, full street address, GPS coordinates, phone, email, and booking policy for a specific business. It distinguishes itself from siblings like check_availability or get_pricing by focusing on static business information rather than dynamic availability or pricing data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying the business name and location, suggesting this tool is for retrieving fixed business details. However, it does not explicitly state when to use this versus alternatives like get_booking_url or find_next_available, nor does it provide exclusion criteria (e.g., when not to use it).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pricing — Get current pricing (Read-only, Idempotent)
Return current pricing for the Mink Viking Experience including per-person rates, print pricing, gift card validity, and what is included in every session. Rates are live from the shop's multi-currency exchange-rate cache (WCML).
| Name | Required | Description | Default |
|---|---|---|---|
| currency | No | Optional ISO 4217 currency code for the primary display currency (e.g. USD, GBP, CAD, AUD, CNY, ISK). EUR is always included as the base. Defaults to ISK. Invalid codes silently fall back to ISK. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, idempotent, and open-world behavior. The description adds valuable context beyond this: it specifies that rates are 'live from the shop's multi-currency exchange-rate cache (WCML)', which informs about data freshness and source, and mentions currency handling (EUR as base, fallback to ISK). This enhances transparency without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by supporting details in a second sentence. Every sentence adds value: the first specifies what is returned, and the second explains data sources and currency handling. It is efficient with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter, rich annotations, no output schema), the description is mostly complete. It covers purpose, data scope, and behavioral context like cache usage. However, it lacks details on output format or error handling, which could be useful since there's no output schema, leaving a minor gap in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'currency' fully documented in the schema. The description adds minimal semantic value by mentioning currency in the context of exchange rates and fallback behavior, but it does not provide additional details beyond what the schema already covers, such as specific usage examples or edge cases.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Return' and specifies the exact resource: 'current pricing for the Mink Viking Experience' with detailed content (per-person rates, print pricing, gift card validity, inclusions). It distinguishes from siblings like check_availability or get_booking_url by focusing solely on pricing data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool: to retrieve pricing information, including specific details like rates and inclusions. It implies usage for pricing queries but does not explicitly state when not to use it or name alternatives among siblings, though the focus on pricing naturally differentiates it from availability or booking tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
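Before publishing, the manifest can be sanity-checked locally. This sketch only mirrors the structure shown above; Glama's actual verification may check more:

```python
import json

def validate_glama_manifest(raw: str, account_email: str) -> bool:
    """Check that a /.well-known/glama.json payload parses as JSON and
    lists a maintainer email matching the given Glama account email."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError:
        return False
    maintainers = doc.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)
```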
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.