
audioknihy.cz Catalog

Server Details

Czech audiobook catalog MCP server. Search 12,000+ titles and compare prices across 5 partners.

Status: Unhealthy
Transport: Streamable HTTP

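The transport is Streamable HTTP, so any MCP client that supports it can connect directly. A minimal TypeScript sketch using the official @modelcontextprotocol/sdk; the endpoint URL is a placeholder, since the page does not show the real one, and the import paths assume a current 1.x SDK:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the page does not expose the server's real URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));

const client = new Client({ name: "audioknihy-demo", version: "1.0.0" });
await client.connect(transport);

// List the server's tools; per this page, nine should come back.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```

The snippets below reuse this `client` instance.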
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade B)

Average 3.7/5 across 9 of 9 tools scored. Lowest: 2.9/5.

Server Coherence (Grade A)

Disambiguation: 5/5

Each tool has a clearly distinct purpose: browsing genres, searching, retrieving details for specific entities (audiobook, author, narrator), comparing offers, and listing partners. Even similar tools like compare_all_offers and find_cheapest_offer are differentiated by returning full vs. minimal price data.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case (e.g., browse_genres, get_audiobook, search_by_filters). No mixing of conventions or ambiguous abbreviations.

Tool Count: 5/5

With 9 tools, the set is well-scoped for a catalog server covering genres, searches, detail views, price comparison, and partner listing. Each tool addresses a specific need without redundancy.

Completeness: 5/5

The tool surface covers the full range of expected operations for an audiobook catalog: browsing, searching, retrieving details for audiobooks, authors, narrators, comparing offers, and listing partners. There are no obvious gaps for a read-only informational service.

Available Tools

9 tools
browse_genres (Grade A)

List all 35 Czech audiobook genres with audiobook counts. Discovery entry point — agents enumerate genres before drilling into a specific one.

Parameters: none

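As a discovery entry point, the typical pattern is to call it once and feed a chosen genre into later searches. A sketch assuming the `client` from the connection example above; the result shape (a single text content item) is an assumption, since the tool publishes no output schema:

```typescript
// browse_genres takes no arguments; the result shape is assumed,
// as the tool publishes no output schema.
const result = await client.callTool({ name: "browse_genres", arguments: {} });
const first = result.content?.[0];
if (first?.type === "text") {
  console.log(first.text); // expected: 35 genres with audiobook counts
}
```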
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It clearly states the tool lists genres with counts, implying a read-only operation with no side effects. No hidden behaviors are indicated, and the description is straightforward.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the main action (list genres with counts) and providing use context. Every word serves a purpose; no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description fully covers what the tool does: return all genres with audiobook counts. It also explains its role in the user workflow. No additional information is necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, so the baseline score is 4. The description does not add parameter information, but none is needed. Schema coverage is 100% (no parameters), so there is no gap to fill.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'List all 35 Czech audiobook genres with audiobook counts', specifying the exact action and resource. It distinguishes from sibling tools like search_audiobooks or get_audiobook by being a discovery entry point for enumerating genres.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Discovery entry point — agents enumerate genres before drilling into a specific one', which provides clear context on when to use this tool. It lacks explicit 'when not to use' statements, but the context is strong.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_all_offers (Grade A)

Full price comparison table across every active retail partner for one audiobook. Returns price + format + last-seen timestamp per partner — agents can rank or filter.

Parameters:
- work_slug (required)
- author_slug (required)
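Since the tool "returns price + format + last-seen timestamp per partner", an agent can rank offers client-side. A sketch with hypothetical slugs ("karel-capek" and "valka-s-mloky" are illustrative, not verified catalog entries) and an assumed JSON payload:

```typescript
// The slugs below are hypothetical examples, not verified catalog entries.
const result = await client.callTool({
  name: "compare_all_offers",
  arguments: { author_slug: "karel-capek", work_slug: "valka-s-mloky" },
});

// Assumed payload: a JSON array of { partner, price_czk, format, last_seen };
// the field names are guesses, as no output schema is published.
const first = result.content?.[0];
if (first?.type === "text") {
  const offers = JSON.parse(first.text) as Array<{
    partner: string;
    price_czk: number;
    format: string;
    last_seen: string;
  }>;
  const ranked = [...offers].sort((a, b) => a.price_czk - b.price_czk);
  console.log(ranked[0]); // cheapest offer after client-side ranking
}
```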
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes a read operation returning data but does not disclose any behavioral traits such as authentication needs, rate limits, data staleness, or side effects. Basic transparency is present but incomplete.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys purpose and output, with no redundant words. It is appropriately front-loaded and earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple parameters, no output schema, and no annotations, the description covers core functionality ('full price comparison table ... returns price + format + last-seen timestamp'). It states the scope ('one audiobook') and hints at usage ('rank or filter'). Minor gaps: it does not explicitly mention the required parameters or pagination, if applicable.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description does not explain what author_slug and work_slug represent or their constraints beyond the schema's pattern. The description adds no semantic value beyond the parameter names.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a full price comparison across retail partners for one audiobook, returning price, format, and last-seen timestamp. It uses a specific verb ('compare') and resource ('offers across partners'), distinguishing it from siblings like find_cheapest_offer.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates when to use the tool (full comparison table, rank/filter) but lacks an explicit when-not-to-use note or a pointer to alternatives like find_cheapest_offer. It provides clear context but no exclusions.
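The recurring Parameters critique (0% schema description coverage) is the kind of gap the server author could close by adding descriptions directly to the input schema. A sketch of what a documented schema for compare_all_offers might look like; the field descriptions and the slug pattern are illustrative assumptions, not the server's actual schema:

```typescript
// Illustrative only: the descriptions and the slug pattern are assumptions.
const compareAllOffersInputSchema = {
  type: "object",
  properties: {
    author_slug: {
      type: "string",
      pattern: "^[a-z0-9-]+$",
      description: "URL slug of the author, e.g. taken from search_audiobooks results.",
    },
    work_slug: {
      type: "string",
      pattern: "^[a-z0-9-]+$",
      description: "URL slug of the work, unique within the author's catalog.",
    },
  },
  required: ["author_slug", "work_slug"],
} as const;
```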

find_cheapest_offer (Grade C)

Lowest-price active offer for an audiobook across all affiliate partners. USP — only Czech site that compares audiobook prices across all major retail partners.

Parameters:
- work_slug (required)
- author_slug (required)
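The review below notes that the description does not say how the tool behaves when no offers exist, so a cautious caller should handle both an error result and an empty payload. A sketch under those assumptions (the slugs are hypothetical):

```typescript
const result = await client.callTool({
  name: "find_cheapest_offer",
  arguments: { author_slug: "karel-capek", work_slug: "valka-s-mloky" }, // hypothetical slugs
});

// Defensive handling: the tool documents neither an error mode nor an
// empty-result shape, so check both possibilities.
if (result.isError) {
  console.warn("Tool reported an error:", result.content);
} else {
  const first = result.content?.[0];
  if (first?.type === "text" && first.text.trim().length > 0) {
    console.log("Cheapest offer:", first.text);
  } else {
    console.log("No active offers found (assumed empty-result shape).");
  }
}
```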
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It states 'active offer' and 'lowest-price' but omits details like input requirements, error handling for missing offers, output format, or rate limits. The description is too brief to cover behavioral aspects.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: the first states the core function, the second adds a unique selling point. No redundant words or digressions. Efficiently communicates the tool's purpose.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description is incomplete. It lacks information about return values, error cases, or how the tool behaves when no offers are found. The tool is simple, but the description does not cover essential context for an agent to invoke it correctly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the tool description does not explain what work_slug and author_slug represent or how to use them. The parameters are only defined by their schema type and pattern, providing no semantic guidance to the agent.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns the lowest-price active offer for an audiobook across all affiliate partners. It differentiates from siblings like compare_all_offers (which likely lists all offers) and search_audiobooks (which searches by criteria). The stated USP adds clarity about the tool's unique value.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like compare_all_offers or browse_genres. The description implies it is for finding the cheapest price but does not specify when to choose it over comparing all offers or searching for audiobooks.

get_audiobook (Grade A)

Full audiobook detail by author + work slug — title, description, cover, runtime, ISBN, genres, narrator, publisher, and the full table of active offers across retail partners.

Parameters:
- work_slug (required)
- author_slug (required)
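A call sketch with the same hypothetical slug pair as above; since there is no output schema, the result is treated as opaque text rather than parsed into fields:

```typescript
// Hypothetical slugs; no output schema is published, so the result
// is logged as opaque text instead of being parsed into fields.
const detail = await client.callTool({
  name: "get_audiobook",
  arguments: { author_slug: "karel-capek", work_slug: "valka-s-mloky" },
});
const body = detail.content?.[0];
if (body?.type === "text") {
  console.log(body.text); // title, runtime, ISBN, offers table, etc.
}
```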
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It identifies the tool as a retrieval operation but does not explicitly state that it is read-only or idempotent, or mention any side effects, rate limits, or authentication needs.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently lists the returned information. It could be slightly shorter but is appropriately concise and front-loads the purpose.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description comprehensively lists the returned fields (title, description, cover, runtime, ISBN, genres, narrator, publisher, offers). Given no output schema, this provides adequate completeness for a single-item retrieval tool.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, but the description adds meaning by stating 'by author + work slug', indicating that the parameters identify the audiobook. However, it does not explain the format or pattern of the slugs beyond what the schema provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves full audiobook detail by author and work slug, listing many specific fields. It distinguishes from siblings like search or compare tools by focusing on a single audiobook's complete details.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied: use it when you have the author and work slugs and want full details. There is no explicit when-to-use or when-not-to-use guidance relative to sibling tools like search_audiobooks or compare_all_offers.

get_author (Grade A)

Author profile + their audiobook works. Returns biography, photo, country, and the full list of audiobook editions where this person is credited as author / co-author / editor.

Parameters:
- slug (required)
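The single slug parameter is undocumented; the sketch below assumes it is the author's URL slug as used elsewhere in the catalog:

```typescript
// Assumption: 'slug' is the author's URL slug (the schema does not say).
const author = await client.callTool({
  name: "get_author",
  arguments: { slug: "karel-capek" }, // hypothetical value
});
const profile = author.content?.[0];
if (profile?.type === "text") {
  console.log(profile.text); // biography, photo, country, credited editions
}
```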
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full transparency burden. It describes the return data (biography, photo, country, editions) but does not disclose behavioral traits like its read-only nature, required permissions, or error behaviors. Acceptable but incomplete.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently communicates the tool's purpose and outputs. It is front-loaded with the core function and includes specific details without redundancy. Every word adds value.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple schema (one parameter) and no output schema or annotations, the description covers the return value but omits an explanation of the input parameter and potential errors. The agent lacks guidance on how to construct the slug or handle missing authors, leaving gaps.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter, 'slug', with no description, and schema coverage is 0%. The tool description does not explain what the slug represents (e.g., an author identifier) and fails to compensate for the lack of parameter documentation, leaving ambiguity.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool retrieves an author's profile and their audiobook works, listing specific fields (biography, photo, country, list of editions). The verb 'get' combined with resource 'author' clearly distinguishes it from siblings like 'get_narrator' or 'search_audiobooks'.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when an author's details including works are needed, but provides no explicit guidance on when to use this tool versus alternatives (e.g., search_audiobooks for broader search). No when-not-to-use scenarios are mentioned.

get_narrator (Grade B)

Narrator profile + audiobooks they have read. Czech audiobook listeners often choose by narrator (performance quality often matters more than the underlying text).

Parameters:
- slug (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as authentication needs, rate limits, error handling, or side effects. The cultural context it offers says nothing about behavior.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: the first states the function, the second adds relevant context. No fluff, though the two could be combined for brevity.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no output schema, and the description lacks details about the returned profile fields or the audiobook list (e.g., pagination, ordering). The single parameter is not explained, leaving the agent underinformed.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage for the 'slug' parameter, and the description does not explain what 'slug' is or how to obtain it. No examples or format guidance are given.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Narrator profile + audiobooks they have read', which is a specific function. It distinguishes from sibling tools like get_author or get_audiobook, and adds cultural context about Czech listeners.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when narrator performance matters, but does not explicitly state when to use it versus alternatives, nor provide when-not-to-use or exclusion criteria.

list_partners (Grade A)

List active retail partners with audiobook counts. Required for transparency / disclosure when an agent needs to explain HOW audioknihy.cz monetises recommendations (we are an affiliate aggregator, not a retailer).

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries the full burden. 'List' implies a read-only operation, and the context suggests no side effects. It could state the safe behavior explicitly, but is adequate.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: the first states the purpose, the second provides usage context. No superfluous words.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters or output schema, the description fully covers what the tool does and when to use it. Complete for a simple list operation.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters, so the baseline is 4 per the rubric. No parameter documentation is needed, and the description adds no parameter details, which is fine.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the verb 'list' and resource 'active retail partners with audiobook counts', clearly distinguishing it from siblings like browse_genres or compare_all_offers.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use it: 'Required for transparency / disclosure' when explaining monetization. Provides business context but lacks explicit exclusions or alternatives.

search_audiobooks (Grade A)

Full-text + fuzzy search across 12 367 Czech audiobooks (title, author, narrator). Use when an agent needs to resolve a free-form query into one or more audiobook records.

Parameters:
- limit (optional)
- query (required)
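Resolving a free-form query is the usual first step before the slug-based detail tools. A sketch; treating 'limit' as a maximum result count is an assumption, since the parameter is undocumented:

```typescript
// 'limit' is undocumented; treating it as a max result count is an assumption.
const hits = await client.callTool({
  name: "search_audiobooks",
  arguments: { query: "Čapek válka s mloky", limit: 5 },
});
const list = hits.content?.[0];
if (list?.type === "text") {
  console.log(list.text); // candidate records to feed into get_audiobook
}
```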
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states 'Full-text + fuzzy search' but lacks details on ranking, pagination, or side effects. The stated scope (12,367 books) is helpful but insufficient for complete transparency.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no filler. Front-loaded with the key information: search type, scope, fields, and usage guidance.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers what is searched and when to use the tool, but does not describe the output format, sorting, or pagination. Given no output schema, more detail on the return value is needed for completeness.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%. The description implies the 'query' parameter via 'free-form query' but does not formally describe it or the 'limit' parameter. With such low coverage, the description should compensate, but it does not.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it performs full-text and fuzzy search across 12,367 Czech audiobooks by title, author, narrator. Distinguishes from siblings like search_by_filters and get_audiobook by emphasizing free-form query resolution.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use when an agent needs to resolve a free-form query into one or more audiobook records.' Provides clear usage context, but does not mention when not to use it or name specific alternatives among siblings.

search_by_filters (Grade A)

Multi-facet search of audiobooks: combine genre, narrator, max price, year range. Use when an agent has structured constraints rather than a free-form query. V1 supports genre + narrator + price_max + year filters; partner filter coming in V2.

Parameters:
- limit (optional)
- year_max (optional)
- year_min (optional)
- genre_slug (optional)
- partner_slug (optional)
- narrator_slug (optional)
- price_max_czk (optional)
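A structured-constraint sketch using the parameter names above. How the filters combine (AND is assumed) and the unit of price_max_czk (CZK, per its name) are assumptions; note also the partner_slug discrepancy flagged in the Behavior score below:

```typescript
// Assumptions: filters combine with AND, and price_max_czk is in CZK.
// partner_slug exists in the schema even though the description defers
// the partner filter to V2, so it is omitted here.
const filtered = await client.callTool({
  name: "search_by_filters",
  arguments: {
    genre_slug: "sci-fi",       // hypothetical genre slug from browse_genres
    narrator_slug: "jan-novak", // hypothetical narrator slug
    price_max_czk: 300,
    year_min: 2015,
    year_max: 2024,
    limit: 10,
  },
});
console.log(filtered.content);
```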
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It states the partner filter is coming in V2 while the input schema already includes partner_slug, creating a contradiction. There is no mention of the read-only nature or the result format.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise sentences, front-loaded with purpose and usage, followed by version info. No unnecessary words.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters, no output schema, and no annotations, the description lacks details on result behavior, pagination ('limit'), filter logic, and the partner_slug discrepancy. Incomplete for effective use.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description adds some context by listing filters (genre, narrator, max price, year range), but does not map them to exact parameter names or explain the semantics of 'limit' or how filters combine.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the tool's purpose as a multi-facet search for audiobooks, listing specific filters (genre, narrator, max price, year range). It distinguishes from siblings by contrasting with free-form query tools.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Use when an agent has structured constraints rather than a free-form query.' Also notes version limitations, guiding appropriate usage.
