Glama

search_library

Search your Zotero library with keyword queries to find items by title, abstract, or attachment text. Results are ranked by BM25 relevance, can be filtered by collection and item type, and include metadata such as creators, date, and attachment count.

Instructions

Find which items in your Zotero library match a keyword query.

Uses BM25 ranking over title, abstract, and indexed attachment full text.

Args:
- query: Search keywords (e.g. "transformer attention", not "what papers discuss attention?").
- collection_key: Optional Zotero collection key to filter results.
- item_type: Optional case-insensitive Zotero parent itemType filter. Canonical values are surfaced in the tool schema and description, and values not present in the current search index return no items plus a warning instead of silently filtering everything out.
- limit: Requested number of results to return (default: 10, capped at 25). limit=0 returns no items, while the response still reports total and returned_count so callers can see how many results matched and how many were actually included under items.
- include_attachments: Include resolved attachment metadata in each returned item. Defaults to False; when False, attachment_count is still present without the heavier attachment array.
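A minimal sketch of an argument payload built from the parameters above. The values are invented for illustration; how the payload is sent depends on your MCP client.

```python
# Hypothetical arguments for a search_library call.
# Parameter names come from the tool docs; values are examples only.
arguments = {
    "query": "transformer attention",  # keywords, not a natural-language question
    "item_type": "journalArticle",     # case-insensitive canonical itemType
    "limit": 10,                       # server clamps this to the 0..25 range
    "include_attachments": False,      # skip the heavier attachment array
}
```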

Returns: JSON with ranked Zotero items under items. Each item includes key, title, creators, date, score, abstract text truncated to 500 characters, attachment_count, collections as {key, name} pairs, optional attachments when include_attachments=True (each attachment keeps only safe summary fields such as key, title, contentType, and linkMode), and optional plain-text snippets. The response also carries warnings for invalid collection_key / item_type filters or empty queries, plus limit metadata. Duplicate parent items that share a DOI or URL are collapsed before limiting, preferring the richer record when duplicates exist. total reports the deduplicated match count and returned_count reports how many items were actually returned.
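An illustrative response shape using the field names from the Returns description. All values here are invented; the server's actual payload may carry additional fields.

```python
# Invented example of the documented response structure.
response = {
    "items": [
        {
            "key": "ABCD1234",
            "title": "Attention Is All You Need",
            "creators": ["Vaswani et al."],
            "date": "2017",
            "score": 12.3,                    # BM25 relevance score
            "abstract": "The dominant sequence transduction models...",
            "attachment_count": 1,
            "collections": [{"key": "COLL0001", "name": "NLP"}],
        }
    ],
    "total": 1,           # deduplicated match count
    "returned_count": 1,  # items actually included under `items`
    "warnings": [],       # e.g. invalid item_type or empty query
}

# Items arrive ranked, so the first (highest-score) item is the best match.
top = max(response["items"], key=lambda item: item["score"])
```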

Canonical item_type values: artwork, audioRecording, bill, blogPost, book, bookSection, case, computerProgram, conferencePaper, dataset, dictionaryEntry, document, email, encyclopediaArticle, film, forumPost, hearing, instantMessage, interview, journalArticle, letter, magazineArticle, manuscript, map, newspaperArticle, patent, podcast, preprint, presentation, radioBroadcast, report, standard, statute, thesis, tvBroadcast, videoRecording, webpage. If the requested value is not present in the current search index, the response returns no items and a warning. Duplicate parent items that share a DOI or URL are collapsed before limiting, preferring the richer record when duplicates exist. Response metadata includes returned_count for the items included under items and total for the deduplicated match count.
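The duplicate-collapse rule described above can be sketched as follows. This is not the server's code: the identity key (DOI, then URL) and the "richness" heuristic (count of populated fields) are assumptions standing in for whatever the implementation actually does.

```python
def collapse_duplicates(items):
    """Sketch of the documented dedup rule: parent items sharing a DOI or
    URL collapse to one record, preferring the 'richer' one. Richness is
    approximated here as the number of populated fields (an assumption)."""
    best = {}    # identity -> (richness, item)
    order = []   # preserve original ranked order
    for item in items:
        ident = item.get("DOI") or item.get("url") or item["key"]
        richness = sum(1 for value in item.values() if value)
        if ident not in best:
            best[ident] = (richness, item)
            order.append(ident)
        elif richness > best[ident][0]:
            best[ident] = (richness, item)
    return [best[ident][1] for ident in order]
```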

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| query | Yes | | |
| collection_key | No | | |
| item_type | No | Optional case-insensitive Zotero parent itemType filter. Canonical values are `artwork`, `audioRecording`, `bill`, `blogPost`, `book`, `bookSection`, `case`, `computerProgram`, `conferencePaper`, `dataset`, `dictionaryEntry`, `document`, `email`, `encyclopediaArticle`, `film`, `forumPost`, `hearing`, `instantMessage`, `interview`, `journalArticle`, `letter`, `magazineArticle`, `manuscript`, `map`, `newspaperArticle`, `patent`, `podcast`, `preprint`, `presentation`, `radioBroadcast`, `report`, `standard`, `statute`, `thesis`, `tvBroadcast`, `videoRecording`, `webpage`. If the requested value is not present in the current search index, the response returns no items and a warning. | |
| limit | No | Requested results to return. Values below 0 are treated as 0, values above 25 are clamped to 25, and the response reports `requested_limit`, `applied_limit`, `limit_cap`, and `limit_capped`. | |
| include_attachments | No | | |
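The limit clamping described in the schema can be sketched as below. Field names mirror the schema text; treating any requested/applied mismatch (including negative values) as `limit_capped` is an assumption, not confirmed server behavior.

```python
LIMIT_CAP = 25

def apply_limit(requested: int) -> dict:
    """Sketch of the documented clamping: below 0 -> 0, above 25 -> 25.
    Returns the limit metadata fields named in the schema description."""
    applied = max(0, min(requested, LIMIT_CAP))
    return {
        "requested_limit": requested,
        "applied_limit": applied,
        "limit_cap": LIMIT_CAP,
        "limit_capped": applied != requested,  # assumption: any adjustment counts
    }
```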

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | | |
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully covers behavioral details: query interpretation, filters, limit clamping and special behavior at 0, attachment inclusion control, duplicate collapse, response structure, and warnings for invalid filters. This exceeds typical expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is long but well-structured, with Args/Returns sections and a front-loaded purpose. There is minor redundancy (duplicate collapse is mentioned twice), but every sentence is informative and earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Even though an output schema exists, the description still explains the return fields (items, total, returned_count, warnings) comprehensively. It covers all parameter behaviors and edge cases, making it complete enough for an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 40%, but the description compensates thoroughly. It explains query semantics (BM25, example), collection_key purpose, item_type canonical values and fallback behavior, limit clamping and reporting, and include_attachments trade-offs. This adds significant value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: finding items in a Zotero library that match a keyword query. It specifies BM25 ranking over title, abstract, and attachment full text, which distinguishes it from sibling tools like search_within_item that focus on a single item.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides guidance on query format (e.g., 'transformer attention' not full questions) and explains duplicate handling. However, it does not explicitly contrast with sibling tools like search_within_item or list_collections, leaving some inference to the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/eric-tramel/zoty'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.