
Catalo.ai - Book Discovery

Ownership verified

Server Details

Book discovery using an AI-curated book catalog that eliminates hallucinations and surfaces lesser-known titles.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
explore_books (Grade: B)
Destructive

Search and filter fiction books. Filters are far reaching, including things like genre, mood, theme, pacing and more. Call list_filters first to discover available filter identifiers and their valid values. Returns a list of books matching the criteria. Always call list_filters() before calling explore_books(). %!(EXTRA string=https://catalo.ai/details/:id)

Parameters (JSON Schema)

- limit (optional): Maximum number of results (defaults to 50, you are encouraged to set this lower if you are searching for a specific book, or increase it if you are searching for a large number of books. Max 100.)
- query (optional): Text search query (searches title, author, or story depending on query_segment)
- series (optional): Filter books by whether they are part of a series, possible values: 'First in Series or Standalone', 'Part of Series'
- filters (optional): JSON object mapping filter identifiers to values. Call list_filters to see all available filters and their valid values. Selection filters accept a string or array of strings (e.g. {"genre": ["Fantasy", "Mystery"], "mood": "Dark"}). Range filters accept a "min,max" string (e.g. {"pages": "200,400", "year": "2000,2024"}). Prefix a value with "!" to exclude it (e.g. {"triggers": ["!Violence"]}).
- include (optional): JSON object mapping filter identifiers to inclusion criteria: "Any" (book must have at least one selected value) or "All" (book must have all selected values). Only relevant for multi-value selection filters. Example: {"genre": "Any", "theme": "All"}
- similar (optional): Comma-separated book IDs to find similar books
- bookmark (optional): Filter books by whether they are bookmarked by the user, possible values: 'want', 'read', 'dropped'
- query_segment (optional): Search scope: 'Title', 'Author', 'Story', 'All'
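To make the filters and include syntax above concrete, here is a minimal Python sketch of how a client might assemble an explore_books arguments payload. The build_explore_args helper is hypothetical, not part of the server; the filter identifiers shown are taken from the examples in the schema descriptions.

```python
import json

def build_explore_args(query=None, filters=None, include=None, limit=50):
    """Assemble an arguments payload for explore_books (hypothetical helper).

    Selection filters take a string or list of strings; range filters take a
    "min,max" string; a leading "!" excludes a value.
    """
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    args = {"limit": limit}
    if query:
        args["query"] = query
    if filters:
        args["filters"] = filters  # passed through as a JSON object
    if include:
        args["include"] = include
    return args

args = build_explore_args(
    query="dragons",
    filters={
        "genre": ["Fantasy", "Mystery"],   # selection filter: list of values
        "pages": "200,400",                # range filter: "min,max" string
        "triggers": ["!Violence"],         # "!" prefix excludes a value
    },
    include={"genre": "Any"},              # match books with at least one genre
    limit=25,
)
print(json.dumps(args, indent=2))
```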
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructiveHint=true, which is unusual for a search operation, but the description offers no explanation for this destructive behavior or what side effects occur. Mentions 'Returns a list of books' but provides no details on result structure, pagination behavior, or the implications of openWorldHint=true.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Contains redundant instruction (repeats 'Call list_filters first' and 'Always call list_filters() before calling explore_books()'). Includes garbage formatting artifact '%!(EXTRA string=https://catalo.ai/details/:id)' which appears to be a template error, significantly degrading professional structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for basic invocation given the schema quality, but gaps remain: no explanation for the destructive annotation, no output schema coverage (the description only says 'Returns a list'), and the formatting error suggests incomplete editing. The complex JSON filter syntax is handled in schema descriptions rather than the main description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema carries the parameter documentation burden effectively. The description adds workflow context (prerequisite list_filters call) that helps understand the filters parameter, but doesn't add syntax details beyond what's already in the schema property descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the specific action (search and filter) and resource (fiction books) clearly. Distinguishes from list_bookmarked_books by focusing on general catalog exploration rather than user bookmarks, but misses the opportunity to contrast with get_book_details (search vs. specific lookup).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit prerequisite workflow: 'Call list_filters first' and 'Always call list_filters() before calling explore_books()'. This is strong sequential guidance. However, lacks explicit 'when not to use' (e.g., vs. get_book_details for known IDs).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_book_details (Grade: A)
Destructive

Get full details for up to 20 books (as returned by explore_books). When showing showing a book to the user, include a link to the website for the book, eg. View on Catalo.ai Always show what readers liked (for_readers) and disliked (against_readers), mood and themes. Include anything else that might be relevant to the users query.

Parameters (JSON Schema)

- ids (required): The book IDs
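Since get_book_details accepts at most 20 books per call, an agent holding a longer result set from explore_books would need to batch its IDs. A small sketch (the helper name is hypothetical, not part of the server):

```python
def chunk_ids(ids, batch_size=20):
    """Split book IDs (e.g. from explore_books results) into batches
    that respect get_book_details' 20-ID limit."""
    return [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]

# 45 IDs split into batches of 20, 20, and 5
batches = chunk_ids([f"book-{n}" for n in range(45)])
```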
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: the 20-book limit, specific output fields (for_readers, against_readers, mood, themes), and link formatting requirements. However, it fails to explain the destructiveHint=true annotation—what data is modified/deleted when 'getting' details—leaving a significant behavioral gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and efficiently structured. The presentation guidelines earn their place as they specify required output formatting. Minor deduction for the typo 'showing showing' and slightly verbose link example.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter with full schema coverage and no output schema, the description adequately compensates by detailing expected return fields (for_readers, against_readers, etc.) and presentation requirements. It would benefit from explaining the destructive annotation or return value structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('The book IDs'), the baseline is 3. The description adds the constraint that up to 20 IDs can be processed and implies they originate from explore_books, but doesn't specify ID format, syntax, or validation rules beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Get full details for up to 20 books' with specific verb, resource, and quantity limit. It explicitly distinguishes from sibling 'explore_books' by noting these are IDs 'as returned by explore_books', establishing the workflow relationship.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the intended workflow by referencing 'explore_books' as the source of IDs, and provides specific presentation guidelines (what fields to show, link format). However, it lacks explicit 'when not to use' guidance or comparison to alternatives like 'list_bookmarked_books'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_bookmarked_books (Grade: A)
Destructive

List the current user's bookmarked books filtered by bookmark state. Use this tool when the user asks about their reading list, books they want to read, books they have read, or books they dropped.

Parameters (JSON Schema)

- state (required): The bookmark state to filter by. Possible values: 'want', 'read', 'dropped'.
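The description's mapping of user phrasings ("reading list", "books they have read") to the state enum can be sketched as an agent-side lookup table. Everything here is a hypothetical illustration; none of these names come from the server:

```python
# Hypothetical mapping from user phrasing to the state enum the tool accepts.
INTENT_TO_STATE = {
    "reading list": "want",
    "want to read": "want",
    "have read": "read",
    "dropped": "dropped",
}

def args_for_intent(phrase):
    """Translate a user phrasing into a list_bookmarked_books arguments dict."""
    state = INTENT_TO_STATE.get(phrase)
    if state is None:
        raise ValueError(f"no bookmark state for {phrase!r}")
    return {"state": state}
```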
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description uses 'List' which strongly implies a safe read-only operation, but annotations declare readOnlyHint: false and destructiveHint: true. This creates a dangerous contradiction where the agent might invoke a destructive operation believing it to be a harmless query.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficiently structured sentences with zero redundancy; the first establishes the core function and the second provides usage guidance, delivering maximum information density with appropriate front-loading.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations indicate this is a destructive, non-read-only operation, the description fails to explain why a 'list' operation would modify or destroy data. This omission creates a critical gap for an operation that appears to be a simple query but behaves as a mutation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage documenting the 'state' parameter values ('want', 'read', 'dropped'), the description adds semantic context by mapping these technical states to user intents ('books they want to read', 'books they have read', etc.), helping the agent understand parameter semantics beyond raw enum values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb (List), clear resource (bookmarked books), and filtering mechanism (bookmark state). It effectively distinguishes from siblings like save_bookmark (write vs. read) and explore_books (personal library vs. general discovery).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The second sentence explicitly maps user intents ('reading list', 'books they want to read', etc.) to the tool's filtering capability, guiding invocation based on query context. However, it does not explicitly name alternative tools to avoid (e.g., when not to use this).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_filters (Grade: A)
Destructive

List all available book filter identifiers and their valid values for use with the filters argument of explore_books.

Parameters (JSON Schema)

No parameters
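explore_books' own description requires calling list_filters first. Assuming a generic call_tool(name, arguments) client function (hypothetical; a real MCP client's API will differ), the intended sequence looks roughly like this:

```python
def plan_exploration(call_tool):
    """Sketch of the documented workflow: discover filters, then search.

    call_tool(name, arguments) is a hypothetical stand-in for an MCP
    client session; it is not part of this server's API.
    """
    # Per the tool's guidance: always call list_filters before explore_books.
    available = call_tool("list_filters", {})
    # Choose only identifiers/values that actually appeared in the response.
    chosen = {name: values[0] for name, values in available.items() if values}
    return call_tool("explore_books", {"filters": chosen, "limit": 10})

# Stub client for illustration only; a real client would issue MCP requests.
def fake_call_tool(name, arguments):
    if name == "list_filters":
        return {"genre": ["Fantasy", "Mystery"]}
    return {"tool": name, "arguments": arguments}

result = plan_exploration(fake_call_tool)
```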

Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description contradicts the annotations: it describes a read operation ('List'), but annotations specify 'readOnlyHint: false' and 'destructiveHint: true'. This creates fatal confusion about whether the tool actually modifies state.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficiently structured sentence that front-loads the action and resource while appending the usage context, with no extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While it conceptually describes the return values ('filter identifiers and their valid values'), the lack of output schema combined with the annotation contradiction leaves the agent uncertain about the response structure and safety profile of the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters in the input schema, no additional parameter documentation is required. The description meets the baseline for this case.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('List') and resource ('book filter identifiers and their valid values') and clearly distinguishes this tool from siblings by explicitly referencing its relationship to 'explore_books'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context by stating the output is 'for use with the filters argument of explore_books', effectively guiding the agent on when to invoke this tool relative to its sibling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_bookmark (Grade: A)
Destructive

Save or update a bookmark for a book. Optionally include a personal note (max 140 characters).

Parameters (JSON Schema)

- note (optional): A personal note about the book. Max 140 characters.
- state (required): The bookmark state. Possible values: 'want', 'read', 'dropped'.
- book_id (required): The ID of the book to bookmark.
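Client-side validation of the documented constraints (the state enum and the 140-character note limit) might look like the following sketch. build_bookmark_args is a hypothetical helper, not part of the server:

```python
VALID_STATES = {"want", "read", "dropped"}
MAX_NOTE_LEN = 140  # documented limit for the optional note

def build_bookmark_args(book_id, state, note=None):
    """Validate and assemble an arguments payload for save_bookmark."""
    if state not in VALID_STATES:
        raise ValueError(f"state must be one of {sorted(VALID_STATES)}")
    args = {"book_id": book_id, "state": state}
    if note is not None:
        if len(note) > MAX_NOTE_LEN:
            raise ValueError("note exceeds 140 characters")
        args["note"] = note
    return args

args = build_bookmark_args("abc123", "want", note="Loved the premise")
```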
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description confirms the write operation nature ('save or update') consistent with annotations (readOnlyHint: false, destructiveHint: true), and mentions the optional nature of notes. However, it does not elaborate on the destructive implications of updates, the non-idempotent behavior, or what happens to existing data when updating.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes core functionality, second covers the optional note constraint. Information is front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of comprehensive annotations covering safety/destructive hints and full schema coverage, the description provides adequate context for a straightforward 3-parameter mutation tool. It appropriately omits return value details since no output schema exists to describe them.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all parameters (book_id, state, note). The description mentions the note parameter's optional status and character limit, but this largely restates schema information. It adds no semantic context for book_id or state parameters, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Save or update a bookmark for a book' — specific verbs (save/update) plus specific resource (bookmark). It effectively distinguishes from read-only siblings like list_bookmarked_books, get_book_details, and explore_books by emphasizing the mutation aspect.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through 'Save or update' clarifying it handles both creation and modification, but provides no explicit guidance on when to use this versus list_bookmarked_books (which reads bookmarks) or other siblings. No when-not-to-use or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

