
Server Details

Gutendex MCP — wraps Gutendex API for Project Gutenberg books (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-gutendex
GitHub Stars: 0
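
The server is reachable over MCP's Streamable HTTP transport, so any MCP client can drive it. Below is a minimal connection sketch assuming the official `mcp` Python SDK and its streamable-HTTP helper; the endpoint URL is a placeholder, since the page does not list the real one.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Hypothetical endpoint: the page above does not show the real URL.
SERVER_URL = "https://<gutendex-mcp-host>/mcp"

async def main() -> None:
    # The helper yields read/write streams plus a session-id getter.
    async with streamablehttp_client(SERVER_URL) as (read, write, _get_session_id):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```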

Tool Descriptions: B

Average 3.7/5 across 9 of 9 tools scored. Lowest: 2.9/5.

Server Coherence: B
Disambiguation: 3/5

Most tools have distinct purposes, but there is some overlap between 'books_by_topic' and 'search_books', which both search books but with different parameters, potentially causing confusion. The 'ask_pipeworx' tool is a meta-tool that overlaps with the functionality of other tools by providing a simplified interface, which could lead to ambiguity in tool selection.

Naming Consistency: 2/5

Naming is inconsistent, mixing several patterns: some tools use verb_noun (e.g., 'get_book', 'search_books'), others use noun phrases (e.g., 'books_by_topic', 'popular_books'), and some are standalone verbs (e.g., 'forget', 'recall', 'remember'). 'ask_pipeworx' and 'discover_tools' deviate further with compound names, creating a chaotic overall pattern.

Tool Count: 4/5

With 9 tools, the count is reasonable for a server focused on book data and memory management. It covers core functionalities without being overly bloated, though the inclusion of meta-tools like 'ask_pipeworx' and 'discover_tools' might slightly inflate the count relative to the primary domain.

Completeness: 4/5

For the Project Gutenberg book domain, the tools provide good coverage with search, retrieval by ID, topic browsing, and popularity listings. The memory tools (remember, recall, forget) add useful session management. However, there are minor gaps, such as no direct update or deletion of book data, but these are likely outside the server's scope, so agents can work around them effectively.

Available Tools

9 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
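
As an illustration only, a sketch of invoking this tool from an already-initialized MCP session (see the connection sketch near the top); the question string is one of the examples from the description.

```python
# Assumes `session` is the initialized ClientSession from the connection
# sketch above (run inside the same async context).
result = await session.call_tool(
    "ask_pipeworx",
    {"question": "Look up adverse events for ozempic"},
)
# The tool publishes no output schema, so inspect the content blocks.
for block in result.content:
    print(block)
```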
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it accepts natural language questions, automatically selects and invokes appropriate tools, and returns results. However, it doesn't mention potential limitations like response time, data source availability, or error handling for ambiguous questions, leaving some behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise. The first sentence states the core functionality, the second explains the mechanism, and the third provides usage guidance with examples. Every sentence earns its place, and the information is front-loaded with the most important details first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (natural language processing with automatic tool selection), the description is mostly complete. It explains the input (natural language questions) and the behavior (automatic tool selection). However, with no output schema and no annotations, it doesn't describe the format or structure of returned answers, which is a minor gap for a tool that could return diverse data types.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage (the 'question' parameter is well-documented in the schema), so the baseline is 3. The description adds value by providing context: it explains that questions should be in 'plain English' or 'natural language' and gives three concrete examples that illustrate the expected format and scope of questions, going beyond the schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'). It distinguishes itself from sibling tools (like search_books or get_book) by being a natural language interface rather than a structured query tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It provides clear alternatives (implicitly suggesting not to use structured sibling tools for natural language queries) and includes three concrete examples ('What is the US trade deficit with China?', 'Look up adverse events for ozempic', 'Get Apple's latest 10-K filing') that illustrate appropriate use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

books_by_topic: C

Browse books by subject or topic (e.g., 'science fiction', 'philosophy'). Returns matching titles, authors, and IDs.

Parameters (JSON Schema)
topic (required): Topic or subject keyword to filter books by (e.g. "science", "love", "history").
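
The tool presumably maps onto the public Gutendex API's `topic` query parameter, which matches subjects and bookshelves. A sketch of the equivalent direct call:

```python
import requests

# Queries the public Gutendex API directly, independent of the MCP server.
resp = requests.get(
    "https://gutendex.com/books",
    params={"topic": "science fiction"},
    timeout=10,
)
resp.raise_for_status()
for book in resp.json()["results"][:5]:
    authors = ", ".join(a["name"] for a in book["authors"])
    print(f'{book["id"]}: {book["title"]} ({authors})')
```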
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but offers minimal behavioral insight. It mentions 'browse' and filtering, but doesn't disclose key traits like whether it's read-only, pagination behavior, rate limits, or authentication needs. For a tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without waste. Every word contributes to understanding the tool's function, making it appropriately sized and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain return values, error conditions, or behavioral nuances needed for effective use. For a tool with no structured support, the description should provide more context to compensate, which it fails to do adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds little beyond the input schema, which has 100% coverage and fully documents the single 'topic' parameter with examples. No additional syntax, constraints, or format details are provided. With high schema coverage, the baseline is 3, as the description doesn't compensate but doesn't detract either.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('browse') and resource ('Project Gutenberg books') with a specific filtering mechanism ('by topic or subject keyword'). It is distinguishable from siblings like 'get_book' (retrieve a specific book), 'popular_books' (list trending books), and 'search_books' (general search), though it does not name them explicitly. The purpose is specific but could be more precise about differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'search_books' or 'popular_books'. The description implies usage for topic-based filtering, but lacks context on prerequisites, exclusions, or comparative scenarios. This leaves the agent to infer usage without clear direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
limit (optional): Maximum number of tools to return (default 20, max 50)
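
A short sketch of a discovery call, assuming the initialized session from the connection sketch; the query string is one of the schema's own examples:

```python
# Assumes the initialized `session` from the connection sketch above.
found = await session.call_tool(
    "discover_tools",
    {"query": "find trade data between countries", "limit": 5},
)
for block in found.content:
    print(block)
```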
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it's a search operation that returns the most relevant tools, and it should be called first in specific scenarios. However, it lacks details on rate limits, error handling, or authentication needs, which would be helpful for a discovery tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with two sentences that efficiently convey purpose and usage guidelines without any wasted words. Every sentence earns its place by providing essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (search/discovery function), no annotations, no output schema, and 100% schema coverage, the description is mostly complete. It covers purpose and usage well but could benefit from more behavioral details (e.g., output format, error cases) to fully compensate for the lack of structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema, as it mentions searching by describing needs but doesn't elaborate on parameter interactions or provide additional context not already in the schema descriptions for 'query' and 'limit'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('search', 'returns') and resource ('Pipeworx tool catalog'), distinguishing it from sibling tools that focus on books rather than tools. It explicitly mentions searching by describing needs and returning relevant tools with names and descriptions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This clearly indicates when to use the tool (large catalog, task-specific needs) versus alternatives. Although it doesn't name specific siblings, the context implies it is for tool discovery rather than book-related operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: C

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool deletes a memory, implying a destructive mutation, but doesn't specify whether deletions are permanent, reversible, require specific permissions, or have side effects (e.g., affecting related data). For a mutation tool, this lack of detail is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It's front-loaded with the core action ('Delete'), making it immediately scannable and earning its place with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no annotations and no output schema, the description is incomplete. It doesn't address critical aspects like what happens after deletion (e.g., confirmation message, error handling), behavioral traits (e.g., idempotency), or how it fits within the broader memory management context implied by sibling tools like 'recall' and 'remember'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'key' documented as 'Memory key to delete'. The description adds no additional meaning beyond this, such as key format, examples, or constraints. Since the schema already provides adequate parameter documentation, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and target resource ('a stored memory by key'), making the purpose immediately understandable. However, it doesn't differentiate this tool from potential siblings like 'recall' or 'remember' that might also manipulate memories, leaving room for ambiguity in a memory-focused toolset.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided about when to use this tool versus alternatives. The description doesn't mention prerequisites (e.g., needing an existing memory key), exclusions, or relationships to sibling tools like 'recall' (which might retrieve memories) or 'remember' (which might create them), leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_book: A

Get full details for a book by ID. Returns title, author, publication year, language, available formats, and download count.

Parameters (JSON Schema)
id (required): The numeric Project Gutenberg book ID.
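
This presumably wraps Gutendex's single-book endpoint, `/books/<id>`. A sketch of the equivalent direct call:

```python
import requests

# Project Gutenberg ID 84 is Frankenstein; any valid numeric ID works.
book = requests.get("https://gutendex.com/books/84", timeout=10).json()
print(book["title"], "-", book["download_count"], "downloads")
print(sorted(book["formats"]))  # MIME types mapped to download URLs
```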
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it indicates this is a read operation ('Get detailed information'), it doesn't disclose important behavioral traits like authentication requirements, rate limits, error conditions, or what 'detailed information' specifically includes. For a tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that gets straight to the point with zero wasted words. It's appropriately sized for a simple lookup tool and front-loads the essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, 100% schema coverage) but lack of annotations and output schema, the description is adequate but incomplete. It covers the basic purpose but doesn't provide enough context about what information is returned or behavioral constraints, leaving gaps for the agent to understand the tool fully.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'id' fully documented in the schema. The description adds minimal value beyond the schema by mentioning 'numeric Project Gutenberg book ID' which essentially repeats the schema description. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get detailed information'), resource ('Project Gutenberg book'), and scope ('by its numeric ID'), distinguishing it from sibling tools like books_by_topic, popular_books, and search_books which have different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context that this tool is for retrieving information about a specific book by ID, implying it should be used when the numeric ID is known. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the tool's function (retrieve/list memories) and context (saved earlier in current or previous sessions), which is adequate. However, it lacks details on error handling (e.g., what happens if a key doesn't exist), performance aspects, or data format of retrieved memories, leaving some behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core functionality in the first sentence, followed by usage context in the second. Both sentences earn their place by providing essential information without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (retrieve/list operations), no annotations, and no output schema, the description does a good job of covering purpose and usage. However, it lacks details on return values (e.g., format of retrieved memories or listed keys) and error conditions, which would be needed for full completeness in this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds value by clarifying the semantics: it explains that omitting the key parameter triggers listing all memories, which provides context beyond the schema's technical description. This elevates the score above the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory by key', 'all stored memories'). It distinguishes itself from sibling tools like 'remember' (for storing) and 'forget' (for deleting), making the purpose unambiguous and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter to list all memories, offering clear usage instructions without being misleading.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text: findings, addresses, preferences, notes)
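
Taken together with the sibling tools 'recall' and 'forget' above, this suggests a simple store/retrieve/delete lifecycle. A sketch of that round trip, assuming the initialized session from the connection sketch; the key name echoes the schema's example:

```python
# Assumes the initialized `session` from the connection sketch above.
await session.call_tool(
    "remember",
    {"key": "target_ticker", "value": "AAPL: checking latest 10-K"},
)
one = await session.call_tool("recall", {"key": "target_ticker"})
all_keys = await session.call_tool("recall", {})  # omit key to list everything
await session.call_tool("forget", {"key": "target_ticker"})
```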
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool performs a write operation ('store'), specifies persistence characteristics ('authenticated users get persistent memory; anonymous sessions last 24 hours'), and implies it's for session-scoped data. However, it doesn't cover potential errors, rate limits, or security constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core action, the second provides usage context, and the third adds important behavioral details. Every sentence earns its place with no wasted words, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (write operation with persistence nuances), no annotations, and no output schema, the description is reasonably complete. It covers purpose, usage, and key behavioral traits like persistence rules. However, it lacks details on return values, error conditions, or specific limitations, leaving some gaps for a mutation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with examples. The description adds minimal semantic context by mentioning what can be stored ('findings, addresses, preferences, notes'), but doesn't provide additional syntax, format, or constraints beyond what the schema specifies. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('store a key-value pair') and resource ('in your session memory'), distinguishing it from siblings like 'recall' (retrieval) and 'forget' (deletion). It explicitly mentions what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but does not explicitly mention when not to use it or name alternatives. It implies usage for persistence needs but lacks explicit exclusions or comparisons to sibling tools like 'recall' for retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_books: C

Search for books by title or author name. Returns book IDs, titles, authors, and download counts.

Parameters (JSON Schema)
query (required): Title or author name to search for.
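
This presumably maps onto Gutendex's `search` query parameter, which matches words in titles and author names. A sketch of the equivalent direct call:

```python
import requests

# The `search` parameter matches words in titles and author names.
resp = requests.get(
    "https://gutendex.com/books",
    params={"search": "jane austen"},
    timeout=10,
)
resp.raise_for_status()
for book in resp.json()["results"][:3]:
    print(book["id"], book["title"], book["download_count"])
```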
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states that the tool searches but doesn't disclose behavioral traits such as result limits, pagination, sorting, error handling, or performance characteristics. This is a significant gap for a search tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It front-loads the purpose clearly and uses minimal words to convey the essential action and scope, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and a search tool that likely returns complex results, the description is incomplete. It doesn't explain what the output contains (e.g., book metadata, links), how results are structured, or any limitations, leaving the agent with insufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'query' documented as 'Title or author name to search for.' The description adds no additional meaning beyond this, such as search syntax, case sensitivity, or partial matching. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search') and resource ('Project Gutenberg books'), specifying the search criteria ('by title or author name'). It is distinguishable from siblings like 'get_book' (retrieves a specific book) and 'popular_books' (lists trending books), though it doesn't explicitly differentiate itself from 'books_by_topic' (topic-based search).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'books_by_topic' or 'popular_books'. It mentions the search criteria but doesn't specify scenarios where this tool is preferred over siblings, leaving usage context implied rather than explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
