Server Details: Source Library

Search 12K rare pre-modern texts translated to English: philosophy, religion, science, literature.

Status: Healthy
Transport: Streamable HTTP
Repository: Embassy-of-the-Free-Mind/sourcelibrary-v2
GitHub Stars: 0

Tool Descriptions (Grade: A)

Average 4.4/5 across all 10 tools scored. Lowest: 3.5/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: browsing, semantic search, literal search, image search, reading, quoting, and feedback. No two tools overlap in function, and descriptions explain when to use each.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (e.g., get_book, search_library) using snake_case. The naming clearly indicates the action and resource, making tools predictable.

Tool Count: 5/5

10 tools is ideal for a library MCP: covers browsing, multiple search modes, reading, and feedback without being excessive. Each tool serves a distinct need.

Completeness: 5/5

The tool set covers the full lifecycle of library exploration: discover (list_books, search_library), find passages (search_translations, search_concept, search_within_book), get overview (get_book), read details (get_book_text), cite (get_quote), and improve (submit_feedback). No obvious gaps.
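As an illustration of that lifecycle, here is a minimal sketch using the MCP Python SDK. It assumes a ClientSession named `session` is already connected to this server over Streamable HTTP, and that each result's first content block carries the JSON payload the tool descriptions mention; the `results` and `book_id` field names are hypothetical, since no output schemas are published.

```python
import json
from mcp import ClientSession  # MCP Python SDK

async def explore(session: ClientSession) -> None:
    # Discover: which books cover a topic?
    hits = await session.call_tool("search_library", {"query": "memory palace"})
    payload = json.loads(hits.content[0].text)   # JSON-in-text payload assumed
    book_id = payload["results"][0]["book_id"]   # field names are hypothetical

    # Overview: AI summary, chapter list, metadata
    await session.call_tool("get_book", {"book_id": book_id})
    # Read: one chapter at a time, with [Page N] markers
    await session.call_tool("get_book_text", {"book_id": book_id, "chapter": 0})
    # Cite: exact page text plus citation URL before quoting
    await session.call_tool("get_quote", {"book_id": book_id, "page": 12})
    # Improve: the only write tool on the server
    await session.call_tool("submit_feedback", {"message": "Great summaries!"})
```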

Available Tools

10 tools
get_book (Grade: A)
Read-only · Idempotent

Get a book's AI-generated summary, chapter list, edition metadata, DOI, and page counts. THIS IS THE RIGHT FIRST CALL whenever the user has named a specific author or work — the summary is typically a multi-paragraph orientation covering the book's argument, structure, and significance, often answering the question without any further searching. Pair with get_book_text to read selected chapters, or search_within_book to locate passages inside it.

Parameters
- book_id (required): The book ID
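A minimal call sketch, assuming the MCP Python SDK and an already-connected `ClientSession`:

```python
from mcp import ClientSession

async def orient(session: ClientSession, book_id: str):
    # Returns the AI summary, chapter list, edition metadata, DOI, and page counts
    return await session.call_tool("get_book", {"book_id": book_id})
```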
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds value beyond annotations (readOnlyHint, idempotentHint) by detailing the return types (summary, chapters, metadata) and the summary's nature (multi-paragraph orientation covering argument, structure, significance). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise, front-loads the outputs, then provides usage guidance. Every sentence is informative without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one parameter, no output schema, and good annotations, the description fully covers what the tool returns, when to call it, and how it relates to siblings, making it complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter (book_id) described adequately. Description does not add extra semantic detail, but the schema is sufficient. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Get a book's AI-generated summary, chapter list, edition metadata, DOI, and page counts,' specifying the verb and resource. It distinguishes from siblings by mentioning 'get_book_text' and 'search_within_book' as complementary tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'THIS IS THE RIGHT FIRST CALL whenever the user has named a specific author or work' and advises pairing with 'get_book_text' or 'search_within_book' for further needs, providing clear when-to-use and alternative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_book_text (Grade: A)
Read-only · Idempotent

Read a book's text. Preferred: use the chapter param to read one chapter at a time (includes [Page N] markers for citation) — call get_book first to get the chapter list. Alternatively, use from/to for explicit page ranges (e.g. from=1 to=50). TRUNCATION: the response always includes truncated: true/false. When truncated=true, the truncation_note field gives the exact next from/to values to call — this means content was cut short by a page-budget limit, NOT that the book ended. An AI agent MUST NOT infer end-of-book from pages_returned alone; check truncated first. Budget limits apply to anonymous callers (~50 pages per 24h); sign in at sourcelibrary.org/auth/signin or get an API key at sourcelibrary.org/developers for higher limits.

Parameters
- book_id (required): The book ID
- chapter (optional): Chapter index (0-based). Preferred over from/to — returns pre-structured chapter text with embedded [Page N] markers.
- from (optional): Start page number (inclusive). Use with to for explicit page ranges.
- to (optional): End page number (inclusive). Recommended chunk size: 50 pages. If the response has truncated=true, use the next from/to from truncation_note.
- part (optional): Part number (1-based) for large chapters split into multiple parts.
- format (optional): json (default, structured with per-page fields) or plain (concatenated text with page markers).
- content (optional): Which text to include: ocr (original language), translation (English), or both (default).
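The truncation contract above lends itself to a simple paging loop. A sketch, assuming an MCP Python SDK `ClientSession` and that the JSON response lands in the first text content block; the `pages` field and the exact shape of `truncation_note` are inferred from the description, not from a published schema:

```python
import json
from mcp import ClientSession

async def read_pages(session: ClientSession, book_id: str, first: int, last: int):
    """Read an explicit page range, following truncation_note until done."""
    frm, to, pages = first, last, []
    while True:
        res = await session.call_tool(
            "get_book_text", {"book_id": book_id, "from": frm, "to": to}
        )
        payload = json.loads(res.content[0].text)  # JSON-in-text assumed
        pages.extend(payload.get("pages", []))     # field name assumed
        if not payload.get("truncated"):           # never infer end from count
            return pages
        # Description promises the exact next from/to values here; shape assumed.
        note = payload["truncation_note"]
        frm, to = note["from"], note["to"]
```

Note that anonymous callers hit the ~50-pages-per-24h budget quickly, so a loop like this will stop early unless the caller is signed in or uses an API key.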
Behavior: 4/5

Annotations already declare readOnly, idempotent, non-destructive. The description adds that chapter mode includes [Page N] markers for citation, a behavioral detail beyond annotations. It does not disclose return format or other behaviors.

Conciseness: 5/5

The description front-loads the purpose and immediately provides actionable instructions, with no unnecessary words.

Completeness: 4/5

The description covers the primary use cases (chapter and page range) and provides sequencing advice. It does not explain the output format or the behavior of parameters like part, but the schema descriptions fill those gaps; the return value is only partially described.

Parameters: 4/5

Schema coverage is 86%, so the baseline is 3. The description adds meaning for chapter (citations) and from/to (page ranges), going beyond the schema descriptions. Other parameters (part, format, content) are not elaborated, but the schema covers them adequately.

Purpose: 4/5

The description clearly states the tool reads book text and distinguishes between chapter and page-range reading. It references get_book for the chapter list, providing some sibling differentiation, but does not explicitly contrast with search_within_book or with get_book for metadata.

Usage Guidelines: 4/5

The description provides explicit guidance on when to use chapter vs from/to and advises calling get_book first. It lacks exclusions or alternatives beyond get_book, and does not mention when not to use this tool.

get_quote (Grade: A)
Read-only · Idempotent

Get exact text of a single page for quoting, with citation URL. ALWAYS use before putting text in quotation marks.

Parameters
- book_id (required): The book ID
- page (required): Page number
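A sketch, under the same SDK and session assumptions as above:

```python
from mcp import ClientSession

async def quote(session: ClientSession, book_id: str, page: int):
    # Fetch the exact page text and citation URL before quoting anything
    return await session.call_tool("get_quote", {"book_id": book_id, "page": page})
```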
Behavior: 4/5

Annotations already indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds that it returns exact text and a citation URL, which is useful behavioral context. It does not contradict the annotations.

Conciseness: 5/5

Two concise sentences: the first gives purpose and return value, the second a clear usage directive. No filler; every sentence is necessary.

Completeness: 4/5

For a simple read tool with two parameters and no output schema, the description covers purpose, return value, and usage context. It does not address error cases or limits, but is sufficient for an agent to use correctly in most scenarios.

Parameters: 3/5

Schema coverage is 100%, so the schema already describes the two parameters. The description adds no additional parameter-level detail beyond naming the resource type (single page). The baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states 'Get exact text of a single page for quoting, with citation URL.' It specifies the verb, resource, and return value, and distinguishes it from siblings like get_book_text, which likely returns full book text.

Usage Guidelines: 4/5

The description explicitly instructs 'ALWAYS use before putting text in quotation marks,' providing clear when-to-use guidance. It implies the cost of not using it but does not mention alternatives or when not to use it, which is acceptable given its narrow purpose.

list_books (Grade: A)
Read-only · Idempotent

Browse the catalog by metadata — filter by author/title fragment, language, category, or translation recency. Returns books with title, author, language, year, and translation progress. Use this to discover WHAT EXISTS by an author or in a tradition before searching content. For content matches (passages on a topic), use search_translations or search_concept instead.

Parameters
- search (optional): Filter by title or author
- limit (optional): Max results (default 100, max 200)
- sort (optional)
- category (optional)
- language (optional)
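A filtered browse might look like this, under the same SDK and session assumptions; the filter values are illustrative, and the valid sort/category values are not documented:

```python
from mcp import ClientSession

async def browse(session: ClientSession):
    # Metadata discovery: what exists by an author or in a language
    return await session.call_tool(
        "list_books", {"search": "Ficino", "language": "Latin", "limit": 50}
    )
```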
Behavior: 4/5

Annotations already indicate readOnly, idempotent, non-destructive. The description adds that it returns specific book fields (title, author, language, year, translation progress), which provides behavioral insight beyond the annotations. No contradictions.

Conciseness: 5/5

Two highly informative sentences plus a clear usage directive. No redundancy; well structured and front-loaded with purpose.

Completeness: 4/5

Covers return fields, filter options, and usage context. It lacks an explicit note that limit has a maximum, but the schema covers that. Overall sufficient for a list tool with no output schema.

Parameters: 4/5

Schema coverage is 40%; the description explains that search, language, category, and sort (recency) act as filters, compensating for the undocumented parameters. However, it does not detail the sort enum values, slightly reducing clarity.

Purpose: 5/5

The description clearly states it lists books filtered by metadata (author, title, language, category, recency) and returns specific fields. It distinguishes the tool from siblings like search_translations and search_concept by framing it as a discovery tool for what exists rather than content search.

Usage Guidelines: 5/5

Explicitly states when to use it ('discover WHAT EXISTS... before searching content') and when not to ('For content matches... use search_translations or search_concept instead'), providing clear context and alternatives.

search_concept (Grade: A)
Read-only · Idempotent

Conceptual / semantic passage search across the whole library. Use when the modern term won't literally appear in historical texts — e.g. "distributed cognition" maps to passages about active intellect, art of memory, wax tablet metaphors; "social contract" maps to pre-Hobbesian discussions of consent and authority. Ranks passages by cosine similarity on Gemini embeddings (768d), so paraphrases and conceptually adjacent phrasings match even when no keyword overlaps. ORIENTATION HINT: if the user named a specific author or work, prefer get_book (returns the book's AI summary + chapter outline) — semantic search is expensive and best reserved for cross-corpus discovery. Prefer search_translations for literal phrases or distinctive single terms; use search_concept when the concept matters more than the wording. Similarity calibration: 0.70+ is a strong match, 0.55–0.70 is worth reading but verify, below 0.55 is mostly conceptual drift. Set max_per_book to diversify results across many books rather than cluster on one source. Each passage carries a snippet_type — quote only "translation" snippets, never "summary". Cross-cultural tip: for pre-modern or non-Western topics, also try source-tradition vocabulary — e.g. for seminal economy try "jing preservation" or "bindu yoga" or "istimnāʾ"; for masturbation try "mollities" (Latin) or "hastamaithuna" (Sanskrit) or "shouyin" (Chinese). The corpus is indexed via period translations that use tradition-internal terminology, so adjacent/euphemistic terms often surface material that modern English keywords miss.

Parameters
- query (required): A concept or natural-language description — full sentences are fine (e.g. "tools that extend the mind beyond the body"). Unlike search_translations, this does NOT require words that appear in the corpus.
- limit (optional): Max passages (default 15, max 50)
- language (optional): Filter by a single original language
- languages (optional): Filter to any of these languages, e.g. ["Sanskrit", "Arabic", "Chinese"]. Use instead of language when targeting multiple traditions.
- exclude_languages (optional): Exclude these languages, e.g. ["Latin", "French", "German", "English"] to surface non-Western sources.
- year_from (optional): Restrict to books published in or after this year (filters out modern editions and translations).
- year_to (optional): Restrict to books published in or before this year.
- max_per_book (optional): Cap on passages from any single book. Useful when one book dominates the conceptual neighborhood; set to 1–2 for diverse author/work coverage.
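A sketch of a diversified, non-Western-leaning concept query, built from the description's own examples (same SDK and session assumptions as above):

```python
from mcp import ClientSession

async def concept_search(session: ClientSession):
    # Full-sentence conceptual query; cap per-book hits for diverse coverage
    return await session.call_tool("search_concept", {
        "query": "tools that extend the mind beyond the body",
        "max_per_book": 2,
        "exclude_languages": ["Latin", "French", "German", "English"],
        "limit": 15,
    })
```

Per the calibration above, results scoring below 0.55 are mostly conceptual drift and can usually be discarded.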
Behavior: 5/5

Beyond the annotations (readOnlyHint, idempotentHint, destructiveHint false), the description reveals ranking via cosine similarity on 768d Gemini embeddings, similarity calibration thresholds (0.70+ strong, etc.), the snippet_type restriction to 'translation' only, and advice on max_per_book usage. No contradiction with the annotations.

Conciseness: 4/5

While long, every sentence is informative, and the description is front-loaded with purpose and usage. The cross-cultural tip is slightly tangential but adds completeness. Efficiently structured for its content density.

Completeness: 5/5

Given no output schema, the description adequately explains what to expect: similarity scores, snippet types, and cross-cultural vocabulary. It covers the ranking method, calibration, and practical tips for diverse results, fully preparing the agent for invocation.

Parameters: 4/5

Schema coverage is 100%, so the baseline is 3. The description adds value beyond that: it explains that query accepts full sentences and contrasts it with search_translations, clarifies the purpose of max_per_book, gives examples for the language filters, and adds context for year_from/year_to. Exceeds the baseline but not the maximum.

Purpose: 5/5

The description clearly states it performs 'Conceptual / semantic passage search across the whole library', distinguishing it from siblings like search_translations (literal phrases) and get_book (specific author/work). The purpose is unambiguous and well defined.

Usage Guidelines: 5/5

Provides explicit guidance: use it when the concept matters more than the wording; prefer get_book for specific authors/works; prefer search_translations for literal phrases. Also includes cross-cultural tips for pre-modern/non-Western topics, aiding selection.

search_images (Grade: A)
Read-only · Idempotent

Search 90,000+ historical illustrations, emblems, engravings, diagrams, AND 23,000+ artworks (paintings, prints, sculptures). Filter by type, subject, figure, symbol, year.

Parameters
- query (optional): Text search (e.g., "ouroboros", "tree of life")
- type (optional): Image type (woodcut, engraving, emblem, diagram)
- limit (optional): Max results (default 20, max 50)
- subject (optional)
- figure (optional)
- symbol (optional)
- book_id (optional)
- year_from (optional)
- year_to (optional)
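A sketch using the documented filters (same SDK and session assumptions as above):

```python
from mcp import ClientSession

async def find_emblems(session: ClientSession):
    # Free-text query plus a type filter, values taken from the examples above
    return await session.call_tool(
        "search_images", {"query": "ouroboros", "type": "emblem", "limit": 20}
    )
```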
Behavior: 4/5

The annotations (readOnlyHint=true, destructiveHint=false, idempotentHint=true) indicate a safe, read-only operation. The description adds value by specifying the corpus size (90k+ illustrations, 23k+ artworks), which helps the agent understand scope. No contradictions with the annotations.

Conciseness: 5/5

The description is a single, front-loaded sentence that efficiently conveys the tool's purpose and key filters with no wasted words. It is appropriately concise.

Completeness: 3/5

Given the absence of an output schema, the description could explain return format and pagination, but it does not. It partially covers the parameters but leaves important ones (book_id, year ranges) undocumented. For a search tool with 9 parameters and no required fields, the description is somewhat incomplete.

Parameters: 3/5

The input schema has 9 parameters with only 33% description coverage. The description lists the filterable fields (type, subject, figure, symbol, year) but omits book_id, limit, and query. While it adds meaning beyond the schema for some parameters, it does not fully compensate for the low schema coverage.

Purpose: 5/5

The description clearly states the tool searches 90,000+ historical illustrations and 23,000+ artworks, specifying the types of images and the filterable fields. This distinguishes it from sibling tools like search_library (which likely searches texts) and search_concept.

Usage Guidelines: 4/5

The description implies the tool is for searching images with filters but does not explicitly compare it to siblings or state when not to use it. The context signals that the siblings are book-centric, so the purpose is clear for image searches.

search_library (Grade: A)
Read-only · Idempotent

Find BOOKS matching a topic. Searches titles, authors, subjects, and (as a secondary signal) translated text. Use this when you want a list of works on a subject. For locating specific passages inside books, use search_translations instead. ORIENTATION HINT: when the user has named a specific author or work, call get_book directly (or list_books to discover the ID) — the AI-generated book summary + chapter outline is often the right first answer and saves repeated passage hunting. Query tips: single distinctive words or short phrases work best ("memory palace", "ouroboros"); quoted phrases match exactly. Each result includes total_matches (full count) + returned (this page) + offset for pagination.

Parameters
- query (required): Search query — prefer single distinctive concepts ("alchemy", "tree of life") over long natural-language phrases. Wrap in "double quotes" for exact phrase.
- limit (optional): Max results per page (default 10, max 100)
- offset (optional): Pagination offset (use with limit to page through total_matches; default 0)
- sort (optional)
- language (optional): Filter by original language (e.g., Latin, German, Greek)
- year_from (optional): Publication year range start
- year_to (optional): Publication year range end
- has_translation (optional): Only return books with translations
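The total_matches/returned/offset contract supports a standard pagination loop. A sketch, with the same SDK, session, and JSON-in-text assumptions as earlier; the `results` field name is hypothetical:

```python
import json
from mcp import ClientSession

async def all_books(session: ClientSession, query: str):
    # Page through total_matches using limit/offset as the description specifies
    offset, results = 0, []
    while True:
        res = await session.call_tool(
            "search_library", {"query": query, "limit": 100, "offset": offset}
        )
        page = json.loads(res.content[0].text)     # JSON-in-text assumed
        results += page.get("results", [])         # field name assumed
        offset += page["returned"]
        if page["returned"] == 0 or offset >= page["total_matches"]:
            return results
```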
Behavior: 4/5

Annotations declare readOnlyHint, destructiveHint, idempotentHint, and openWorldHint. The description adds that translated text is a 'secondary signal', query tips (single words, quoted phrases), and pagination fields (total_matches, returned, offset). It does not contradict the annotations and adds useful behavioral context.

Conciseness: 4/5

The description is front-loaded with purpose and usage guidelines. Each sentence adds value: purpose, fields, when to use, orientation hint, query tips, pagination. It could be slightly more concise, but the structure is logical.

Completeness: 4/5

Given 8 parameters, high schema coverage, good annotations, and no output schema, the description covers the return-value structure (total_matches, returned, offset) and search behavior. For a search tool, this is adequately complete.

Parameters: 4/5

Schema coverage is 88%, so the baseline is 3. The description adds extra meaning for the query parameter (prefer single concepts, quote exact phrases) and explains the pagination fields beyond the schema descriptions. The schema already covers the other parameters, and the description slightly compensates for the coverage gap.

Purpose: 5/5

The description clearly states 'Find BOOKS matching a topic' and specifies the search fields (titles, authors, subjects, translated text). It distinguishes the tool from its sibling search_translations (for passages). The verb 'Find' and resource 'BOOKS' are specific.

Usage Guidelines: 5/5

Explicitly says when to use this tool ('when you want a list of works on a subject') and when not to (for passages, use search_translations). Also provides an orientation hint to use get_book or list_books for a specific author/work. Clear alternatives are given.

search_translations (Grade: A)
Read-only · Idempotent

Find specific PASSAGES inside books — returns page-level snippets with citation URLs. Use this when you want a quote or evidence on a topic across the whole library. ORIENTATION HINT: if the user has named a specific author or work, prefer get_book (returns a summary + chapter outline) over passage hunting — every book in the corpus has an AI-generated summary that is usually the right first read. Use search_translations when sweeping across many books for evidence of a theme. For finding which BOOKS cover a topic, use search_library. Query tips: single distinctive terms ("memory palace", "wax tablet") work best; multi-word natural-English queries ("unity of the intellect") may return fewer results because matching is term-based, not phrase-based. Each snippet has a snippet_type — "translation"/"ocr" means it is a verbatim extract from the source text; "summary" means it is AI-generated description (do not quote those as the author's words). Response includes total_matches, returned, and offset for pagination. Cross-cultural tip: for pre-modern or non-Western topics, search source-tradition vocabulary rather than modern English terms — e.g. for seminal economy search "jing" or "bindu" or "istimnāʾ", not "semen retention"; for female homoeroticism search "tribade" or "sahq", not "lesbian". The corpus is indexed via period translations that use tradition-internal terminology.

Parameters
- query (required): Search term — prefer single distinctive concepts ("harmony of the spheres", "active intellect") over long natural-language phrases. Multi-word queries match all terms (not phrase); wrap in "double quotes" for exact phrase.
- limit (optional): Max results per page (default 20, max 50)
- offset (optional): Pagination offset (use with limit to page through total_matches; default 0)
- book_id (optional): Search within a specific book
- language (optional): Filter by a single original language
- languages (optional): Filter to any of these languages, e.g. ["Sanskrit", "Arabic", "Chinese"]. Use instead of language when targeting multiple traditions.
- exclude_languages (optional): Exclude these languages, e.g. ["Latin", "French", "German", "English"] to surface non-Western sources.
- year_from (optional)
- year_to (optional)
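A sketch that applies the snippet_type rule mechanically, keeping only verbatim extracts (same SDK, session, and JSON-in-text assumptions as above; the `snippets` field name is hypothetical):

```python
import json
from mcp import ClientSession

async def quotable(session: ClientSession, term: str):
    res = await session.call_tool(
        "search_translations",
        {"query": term, "languages": ["Sanskrit", "Arabic", "Chinese"]},
    )
    payload = json.loads(res.content[0].text)       # JSON-in-text assumed
    return [s for s in payload.get("snippets", [])  # field name assumed
            if s.get("snippet_type") in ("translation", "ocr")]  # never "summary"
```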
Behavior: 5/5

Annotations already declare readOnlyHint, destructiveHint, idempotentHint. The description adds behavioral context: snippet types (translation/ocr vs summary), matching behavior, pagination response fields, and source-tradition vocabulary tips. No contradictions.

Conciseness: 4/5

The description is well structured and front-loaded with purpose and usage. While somewhat long, every sentence adds value, covering query tips, snippet types, pagination, and cultural context. It could be slightly more concise but is not wasteful.

Completeness: 5/5

Given 9 parameters, no output schema, and the task's complexity, the description is very complete: it covers return format, pagination, snippet types, query behavior, and cultural nuances. There is no output schema, but the response fields are described.

Parameters: 5/5

Schema coverage is 78%. The description adds query tips (single distinctive terms work best), explains the pagination parameters (limit, offset), and clarifies the language filters (use 'languages' for multiple traditions, exclude_languages to surface non-Western sources).

Purpose: 5/5

The description clearly states it finds passages inside books, returning page-level snippets with citation URLs. It distinguishes itself from siblings like get_book (summary plus outline) and search_library (which books cover a topic).

Usage Guidelines: 5/5

Explicitly says when to use it (sweeping across many books for evidence of a theme), when not to (prefer get_book for a specific author/work), and mentions alternatives. Provides query tips and cross-cultural hints.

search_within_book (Grade: A)
Read-only · Idempotent

Deep-dive inside a single book. Runs Atlas keyword search AND scoped semantic search in parallel against that book's pages, then merges results — so this works for both literal terms ("ouroboros") and conceptual queries ("the marriage of opposites"). Typical workflow: use search_library or search_concept to find a candidate book; then call this with that book_id to surface every relevant page. Faster than re-searching globally because it's scoped to one book's 100-500 pages. Returns OCR and translation snippets with page numbers, ready to cite.

Parameters
- query (required): Search query
- book_id (required): The book ID to search within
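The two-step workflow the description recommends, sketched under the same SDK and session assumptions (the candidate book ID comes from a prior search):

```python
from mcp import ClientSession

async def deep_dive(session: ClientSession, book_id: str):
    # Scoped hybrid search: literal terms and conceptual queries both work
    return await session.call_tool(
        "search_within_book",
        {"book_id": book_id, "query": "the marriage of opposites"},
    )
```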
Behavior: 4/5

Annotations already declare the tool read-only and idempotent, so the description's job is lighter. It adds operational details: the tool runs keyword and semantic search in parallel, merges results, works for literal and conceptual queries, and returns OCR/translation snippets with page numbers. This provides useful behavioral context beyond the annotations.

Conciseness: 5/5

The description is five sentences long, efficiently front-loading the core purpose and mechanism. It flows logically from purpose to capability to workflow to performance to output. Every sentence adds essential information without redundancy.

Completeness: 5/5

Given the tool's simplicity (2 required parameters, no output schema), the description is remarkably complete. It explains the dual search mechanism, the types of queries supported, the typical workflow, the performance benefit, and the return format (snippets with page numbers). No gaps remain for an agent to wonder about.

Parameters: 4/5

The schema covers both parameters with basic descriptions (100% coverage). The description adds value by explaining how they are used: query can be literal or conceptual, and book_id is the target from previous searches. This enriches the parameter semantics beyond the schema alone.

Purpose: 5/5

The description clearly states it performs a deep dive inside a single book by running both keyword and semantic search, merging the results, and returning snippets with page numbers. It distinguishes itself from siblings like search_library and search_concept by specifying the typical workflow and noting it is faster and scoped to a single book.

Usage Guidelines: 5/5

The description explicitly says when to use this tool: after finding a candidate book via the other search tools. It contrasts with global searching and provides a clear workflow: use search_library or search_concept first, then call this with the book_id. It also notes the scope (100-500 pages), which guides appropriate usage.

submit_feedback (Grade: A)

Submit feedback, bug reports, or feature requests to the Source Library team.

Parameters
- message (required): Your feedback (2-5000 chars)
- name (optional)
- email (optional)
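A sketch (same SDK and session assumptions; the message text and email are illustrative):

```python
from mcp import ClientSession

async def report(session: ClientSession):
    # message must be 2-5000 chars; name and email are optional
    return await session.call_tool("submit_feedback", {
        "message": "Page images fail to load for some 17th-century titles.",
        "email": "reader@example.org",  # optional sender info
    })
```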
Behavior: 2/5

Annotations are all false, so the description carries the full burden. It only states 'Submit', implying a write operation, but lacks details on behavioral traits such as whether submissions are stored, whether confirmation is provided, or any rate limits. The description adds minimal behavioral context.

Conciseness: 4/5

The description is a single front-loaded, concise sentence. It efficiently captures the core action, though it could carry additional detail without losing conciseness.

Completeness: 3/5

For a simple submission tool with no output schema and one required parameter, the description covers the basic action. However, it lacks information on response behavior, error handling, and what happens after submission. It is minimally adequate.

Parameters: 2/5

Schema coverage is only 33% (only 'message' has a description). The description does not explain the purpose of the 'name' and 'email' parameters beyond what is in the schema, missing the opportunity to clarify their usage (e.g., optional sender info).

Purpose: 5/5

The description clearly states the action ('Submit') and the resource ('feedback, bug reports, or feature requests' to the Source Library team). It stands apart from the sibling tools, which are all read/search operations (e.g., get_book, list_books).

Usage Guidelines: 4/5

The context implies this tool is for submitting feedback while the siblings retrieve data, so the usage is clear. However, no explicit exclusions, alternatives, or prerequisites are mentioned, such as when not to use it.
