
Server Details

Search and cite the full Pāli Canon (Tipiṭaka, ~444K segments) — Sutta, Vinaya, Abhidhamma at parity with SuttaCentral. Hybrid search, full-sutta fetch, translation comparison, Pāli word lookup. Free, non-commercial, offered as Dhamma Dāna.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 10 of 10 tools scored.

Server Coherence: A

Disambiguation: 4/5

Most tools have clearly distinct purposes, but the three search tools (search_by_keyword, search_hybrid, search_semantic) overlap in functionality and could confuse an agent, though their descriptions help differentiate them.

Naming Consistency: 5/5

All tool names follow a consistent snake_case verb_noun pattern (e.g., get_sutta, list_editions, search_by_keyword), with clear groups for retrieval, listing, and search operations.

Tool Count: 5/5

With 10 tools covering retrieval, search, reference, dictionary, morphological analysis, and structure, the count is well-scoped for a Pali Canon server—neither too few nor excessive.

Completeness: 4/5

The tool surface covers key use cases like sutta retrieval, search, comparison, citations, and word analysis. Minor gaps exist (e.g., no dedicated segment-level retrieval, no commentary lookup), but it's largely complete for typical reading/citation workflows.

Available Tools

10 tools
compare_translations: A

Compare every available translation of the same segment.

💡 Use this tool when:

  • The user asks about the meaning/translation of a single line of Pāli and wants to compare several translators

  • Checking how different translators interpret a passage (e.g., technical terms like dukkha, anattā, nibbāna carry different nuances across translations)

  • Academic work that needs to quote multiple translations

🔍 vs get_sutta: this tool targets one segment (line-level), while get_sutta targets a whole sutta. To compare an entire sutta, call compare_translations for each segment.

📋 segment_id format: <sutta_id>:<paragraph>.<line>, e.g. mn1:171.4 (Mūlapariyāyasutta paragraph 171 line 4 — "Nandī dukkhassa mūlaṁ"). Find segment_ids via get_sutta or search results.

⚠️ Current state: the translation table is still empty (the DB loads only the default Pāli+English from bilara). total_editions is usually 0; text_pali and text_english are always available. Thai editions will be added later.
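The segment_id format above can be illustrated with a small parser sketch. This is a hypothetical helper, not part of the server's API:

```python
import re

# Hypothetical helper: splits a segment_id of the form
# <sutta_id>:<paragraph>.<line> (e.g. "mn1:171.4") into its parts.
SEGMENT_ID = re.compile(r"^(?P<sutta>[a-z-]+[\d.-]*\d):(?P<para>\d+)\.(?P<line>\d+)$")

def parse_segment_id(segment_id: str) -> dict:
    m = SEGMENT_ID.match(segment_id)
    if m is None:
        raise ValueError(f"not a valid segment_id: {segment_id!r}")
    return {
        "sutta_id": m.group("sutta"),
        "paragraph": int(m.group("para")),
        "line": int(m.group("line")),
    }
```

For example, `parse_segment_id("mn1:171.4")` yields `{"sutta_id": "mn1", "paragraph": 171, "line": 4}`.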

Parameters (JSON Schema)

segment_id (required) — Segment ID, e.g. "mn26:8.2", "dn22:17.1", "mn62:5.3"

Output Schema (JSON Schema)

No output parameters

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses important behavioral traits: the current state (translation table empty, only default Pali+English loaded, total_editions often 0, text_pali and text_english always available). This compensates for the lack of structured annotations and helps set expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with emojis and clear sections (when to use, vs get_sutta, format, current state). It is front-loaded with the core purpose. While slightly long, every sentence adds value and earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, the description appropriately focuses on usage context and limitations. It explains the tool's scope, parameter format, and current data state, providing sufficient completeness for an AI agent to invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes segment_id with an example. The description adds value by explaining the format (<sutta_id>:<paragraph>.<line>) and noting that segment_id can be obtained from get_sutta or search results. This enhances comprehension beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: comparing all available translations of a single segment. It specifies the resource (a segment) and distinguishes itself from the sibling tool get_sutta by targeting the line level rather than a whole sutta.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use scenarios (e.g., user asks meaning of a single line, checking translator differences, academic work) and contrasts with get_sutta. It also explains the format of the segment_id parameter and where to find it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_reference: A

Generate a correct citation for a sutta.

💡 Use this tool when:

  • The user requests a citation for academic work/articles/references

  • You need the sutta's position in the Tipiṭaka (piṭaka/nikāya)

  • You need a ready-to-use formatted citation string

🔗 vs get_sutta: this tool returns only metadata + citation, no segments; pair it with get_sutta when you want both content and citation.

Parameters (JSON Schema)

sutta_id (required) — Sutta ID, e.g. "mn1", "dn22", "sn56.11"

Output Schema (JSON Schema)

No output parameters

Behavior: 4/5

No annotations are provided, but the description discloses that the tool returns only metadata and a citation, implying read-only behavior. It does not mention potential side effects, but for a citation generator this is sufficient.

Conciseness: 5/5

The description is concise and well structured with bullet points and emojis, with no redundant sentences. It front-loads the primary purpose.

Completeness: 4/5

The tool has an output schema and a simple input (one parameter). The description covers purpose, usage, and the sibling comparison. It lacks error-handling and prerequisite information but is generally complete for this tool's complexity.

Parameters: 3/5

Schema coverage is 100%, with a clear description of sutta_id including examples. The description does not add parameter meaning beyond the schema, so the baseline of 3 is appropriate.

Purpose: 5/5

States clearly that the tool creates correct citations for suttas. It distinguishes itself from the sibling get_sutta by specifying that it returns only metadata and a citation, not segments.

Usage Guidelines: 5/5

Explicitly lists when to use the tool (citation requests, position in the Tipiṭaka, formatted citations) and contrasts it with get_sutta for combined content-plus-citation needs.

get_sutta: A

Fetch the content of a sutta/section by ID — returns the full content of every segment.

Use standard SuttaCentral IDs, e.g.:

  • mn1 = Majjhima Nikāya sutta 1 (Mūlapariyāyasutta, 334 segments)

  • dn22 = Dīgha Nikāya sutta 22 (Mahāsatipaṭṭhānasutta, 454 segments)

  • dn16 = Dīgha Nikāya sutta 16 (Mahāparinibbānasutta — the longest sutta, 1,664 segments)

  • sn56.11 = Saṁyutta 56.11 (Dhammacakkappavattana)

  • mn62 = Majjhima Nikāya 62 (Mahārāhulovāda — teaching to Rāhula)

  • dhp1-20 = Dhammapada verses 1-20 (KN uses a range format)

  • mil3.1.1 = Milindapañha 3.1.1 (paracanonical, 3-4 level ID)

💡 Advice for AI clients:

  • Quote text_pali / text_english directly from the segment — do not pull from training memory. The system can verify quotes, and AI models often misremember.

  • Short segments ending in :0.1 or :0.2 are usually headers (nikāya/sutta titles), not actual teaching content — start content from :1.1.

  • Segments ending in "...niṭṭhitaṁ" (e.g. mn1:194.10 = "Mūlapariyāyasuttaṁ niṭṭhitaṁ paṭhamaṁ") are closing colophons.

  • Segments containing …pe… (peyyāla) are abbreviated repetitions, not missing data — Pāli texts abbreviate this way.

  • The response has a cross_reference field — render it as a clickable markdown link in the reply so the user can verify against the source.

Coverage (v1.1+): all 3 piṭakas, at parity with SuttaCentral bilara-data:

  • Sutta Piṭaka (DN/MN/SN/AN/KN): ✅ complete Pāli + Sujato EN (5,791 sections)

  • Vinaya Piṭaka: ✅ complete Pāli + Brahmali EN — uses SC codes such as pli-tv-bu-vb-pj1 (Pārājika 1), pli-tv-bi-vb-pj1 (bhikkhunī), pli-tv-kd1 (Mahāvagga), pli-tv-pvr10 (Parivāra), pli-tv-bu-pm (Bhikkhu Pātimokkha)

  • Abhidhamma Piṭaka: ✅ complete, 7 books (ds, vb, dt, pp, kv, ya, patthana) — Pāli only (bilara has no complete EN translation)

Parameters (JSON Schema)

edition (optional) — Thai translation edition: "dhiranandi", "jayasaro", "mbu", "royal", or None. If omitted, text_thai comes from bilara-data. ⚠️ The DB currently has no Thai editions, so these values usually return null.
language (optional; default: "pali") — Desired language: "pali", "thai", "english", or "all". Thai is currently disabled on the server and returns null.
sutta_id (required) — Sutta ID, e.g. "mn1", "dn22", "sn56.11", "dhp1-20"

Output Schema (JSON Schema)

No output parameters

Behavior: 4/5

With no annotations provided, the description carries the full burden. It explains that the tool returns full segment content, notes the cross_reference field for verification, and discloses limitations (e.g., Thai translation currently unavailable). It could be more explicit about idempotency and error handling, but overall it provides substantial behavioral context.

Conciseness: 3/5

The description is lengthy and includes a coverage section that may be redundant for a tool description. While well structured with clear sections, it could be more concise by trimming examples and moving some contextual information to documentation.

Completeness: 4/5

Given the complexity (3 parameters, an output schema, diverse usage scenarios), the description covers most aspects: input format, output expectations, behavioral notes, and coverage. It lacks any mention of error handling or behavior for invalid IDs, but is otherwise thorough.

Parameters: 4/5

Schema coverage is 100%, so the baseline is 3. The description adds value with extensive sutta_id examples, clarifying the format and valid values, and noting current limitations of the edition and language parameters (e.g., Thai translation not available). This goes beyond the schema descriptions.

Purpose: 5/5

The description clearly states the tool's function: retrieving full sutta content by standard ID. It provides numerous examples and distinguishes itself from the search tools by focusing on direct ID-based retrieval.

Usage Guidelines: 4/5

The description includes a dedicated advice-for-AI-clients section with practical tips on interpreting segments, quoting directly, and handling special cases like headers and colophons. However, it lacks explicit guidance on when to use this tool vs. sibling tools like search_by_keyword.

get_word_definition: A

Fetch the dictionary definition of a Pāli word, along with example sentences showing its context in the suttas.

Use it as a Pali Dictionary Bridge to understand a word's true meaning: it presents the definition alongside the contexts in which the Buddha actually used the word.

📖 About the dictionary database: this tool draws on several source dictionaries, including the Thai-language "Dictionary of Buddhism" by Somdet Phra Buddhaghosacariya (P. A. Payutto). These are complete original scholarly works (not translations), so they remain available even when ENABLED_LANGUAGES disables Thai. The AI client should translate Thai results into the user's language if needed.

Parameters (JSON Schema)

word (required) — The word to look up (e.g. "dukkha", "กฐิน")
language (optional; default: "all") — Dictionary language (e.g. "en", "thai", or "all")
limit_context (optional) — Number of example sentences from the suttas to show (1-5)

Output Schema (JSON Schema)

No output parameters

Behavior: 4/5

With no annotations provided, the description carries the burden of disclosure. It explains that the tool returns definitions with sutta context, notes that the source is original scholarly work that is always accessible, and alerts that results may be in Thai and need translation. No destructive behavior is mentioned, which is appropriate for a read tool. Minor omission: no mention of error handling or missing words.

Conciseness: 4/5

The description starts with a clear one-sentence purpose statement, followed by a usage line and an explanatory paragraph about the dictionary database. The extra paragraph is useful but slightly wordy. The emojis add visual structure but are not essential. Overall it is well structured and informative, though it could be trimmed.

Completeness: 4/5

Given that an output schema exists, the description does not need to detail return values. It covers the main purpose, data source, and behavior regarding Thai content. It lacks information about error handling or word availability, but for a dictionary-lookup tool it is reasonably complete.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description does not add significant value beyond the schema; for example, it does not enumerate the available languages or clarify how limit_context affects output. The parameter descriptions in the schema are already clear.

Purpose: 5/5

The description clearly states that the tool retrieves dictionary definitions of Pāli words along with contextual example sentences. It uses a specific verb (retrieve) and resource (Pāli dictionary entries), and distinguishes itself from sibling tools like search_by_keyword by focusing on definitions with context.

Usage Guidelines: 4/5

The description explicitly positions the tool as a 'Pali Dictionary Bridge' for understanding true meaning, implying it is for definitions rather than full-text search. It notes that the dictionary content is always available even if Thai is disabled, guiding multilingual use. However, it does not explicitly contrast with sibling tools or state when not to use it.

list_editions: A

List the translation editions available in the system, with coverage statistics.

💡 Use this tool when:

  • Before calling compare_translations or get_sutta(edition=...) — to learn which edition values are valid and which editions are worth comparing

  • The user asks which translation editions are in the DB

🔍 Filtering: this tool filters by the server's TRIPITAKA_ENABLED_LANGUAGES — with Thai disabled it returns an empty list. Only enabled languages are included.

⚠️ Current state: the DB mostly contains Pāli (the default from SuttaCentral bilara) + English (Sujato). Thai editions (dhiranandi, jayasaro, mbu, royal) are not yet indexed and return empty until loaded.

Returns: a list of edition objects, each with:
  • edition: edition code, e.g. "sujato", "dhiranandi", "mbu"
  • translator: translator name
  • language: ISO language code ("pi", "en", "th")
  • segment_count: number of segments translated in this edition
  • sutta_count: number of suttas translated
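As a sketch, one returned edition object might map onto a structure like this. Field names come from the Returns list above; the values are made-up examples:

```python
from dataclasses import dataclass

@dataclass
class Edition:
    edition: str        # edition code, e.g. "sujato"
    translator: str     # translator name
    language: str       # ISO code: "pi", "en", "th"
    segment_count: int  # segments with a translation in this edition
    sutta_count: int    # suttas with a translation

# Made-up example values, for illustration only:
example = Edition(edition="sujato", translator="Bhikkhu Sujato",
                  language="en", segment_count=123_456, sutta_count=5_000)
```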

Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

result (required)
Behavior: 5/5

With no annotations, the description fully discloses behavior: filtering by enabled languages, the current indexing state for Thai editions, and the exact structure of returned objects. No contradictions.

Conciseness: 4/5

Well structured with emojis and sections, but slightly verbose. It front-loads the key information, though it could be more concise without losing clarity.

Completeness: 5/5

Complete coverage for a no-parameter tool with a described output schema. It includes the filtering logic, state caveats, and return structure, leaving no significant gaps.

Parameters: 4/5

No parameters exist, so the baseline is 4. The description adds meaning by explaining output fields and behavior; no parameter semantics are needed.

Purpose: 5/5

The description clearly states that the tool lists translation editions with coverage statistics, with a specific verb (list) and resource (editions). It distinguishes itself by providing usage context relative to sibling tools like compare_translations and get_sutta.

Usage Guidelines: 5/5

Explicitly states when to use the tool (before compare_translations or get_sutta, or when the user asks about editions) and describes filtering behavior based on server configuration. It also notes current state limitations, giving clear guidance on expected results.

list_structure: A

Show the structure of the Tipiṭaka — all 3 piṭakas — with coverage statistics.

💡 Use this tool when:

  • The user asks for an overview of the Tipiṭaka (what it contains / which nikāyas)

  • Checking coverage before promising a search can succeed — segment_count > 0 indicates the sub-collection is loaded

  • Verifying scope before compiling an artifact

📊 Current state, v1.1+ (at parity with SuttaCentral bilara-data):

  • Sutta Piṭaka complete: DN 37, MN 155, SN 1,829, AN 1,419, KN 2,351 sections (~284,702 segments total) — Pāli + Sujato EN

  • Vinaya Piṭaka complete: Bhikkhu Vibhaṅga 222, Bhikkhunī Vibhaṅga 127, Khandhaka 22, Parivāra 51 + Pātimokkha 2 (~71,557 segments) — Pāli + Brahmali EN

  • Abhidhamma Piṭaka complete: 7 books (ds, vb, dt, pp, kv, ya, patthana), ~88,414 segments — Pāli only (bilara has no complete EN translation)

  • Total: ~444,673 segments in the DB

⚠️ Remaining quirks:

  • The schema has duplicate codes — legacy and modern SC codes co-exist:

    • Vinaya: vin-v/vin-m/vin-c/vin-p (legacy, segment_count = 0) alongside pli-tv-bu-vb/pli-tv-bi-vb/pli-tv-kd/pli-tv-pvr (active, with segments)

    • Abhidhamma: ym/pt (legacy, segment_count = 0) alongside ya/patthana (active)

  • Pick codes with segment_count > 0 — the others are metadata placeholders from an old migration

🌐 Languages: always returns Pāli + Thai + English labels (metadata, not segment text); text content follows ENABLED_LANGUAGES. There are currently no Thai translations in the DB — Thai users can follow cross_reference links to 84000.org instead.

Returns: a hierarchical structure:
  • pitakas{vinaya/sutta/abhidhamma} → nikayas[]
  • each nikaya: code, name (in 3 languages), sutta_count, segment_count
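Putting the quirk note and the Returns shape together, a client might skip the legacy placeholder codes like this. The key names are assumed from the Returns description, and the data is a made-up fragment:

```python
def active_codes(structure: dict) -> list[str]:
    """Collect nikaya codes that actually have segments loaded."""
    codes = []
    for pitaka in structure["pitakas"].values():
        for nikaya in pitaka["nikayas"]:
            if nikaya["segment_count"] > 0:  # legacy placeholders are 0
                codes.append(nikaya["code"])
    return codes

# Made-up fragment mirroring the documented quirk:
sample = {"pitakas": {"vinaya": {"nikayas": [
    {"code": "vin-v", "segment_count": 0},             # legacy placeholder
    {"code": "pli-tv-bu-vb", "segment_count": 40000},  # active collection
]}}}
```

Here `active_codes(sample)` keeps only `"pli-tv-bu-vb"`, matching the advice to use codes with segment_count > 0.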

Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

No output parameters

Behavior: 5/5

Since no annotations are provided, the description fully bears the burden of behavioral disclosure. It details important quirks such as the duplicate legacy and modern codes, instructs agents to use codes with segment_count > 0, and notes language support (Pāli, Thai, English labels) and the absence of Thai translations. This is comprehensive.

Conciseness: 4/5

The description is somewhat lengthy but well structured with bullet points, emoji markers, and clear sections. Every sentence adds value; trimming a few redundant phrases would be a minor improvement.

Completeness: 5/5

Given zero parameters and an explicit output structure described in the 'Returns' section, the description is complete. It explains the hierarchical structure, language handling, known quirks, and current data state, leaving no ambiguity for the agent.

Parameters: 4/5

The input schema has no parameters, so the description cannot add parameter-specific meaning. With 0 parameters, the baseline is 4. The description focuses on output and behavior instead, which is appropriate.

Purpose: 5/5

The description clearly states the tool's purpose: showing the hierarchical structure of the Tipiṭaka with coverage statistics. It specifies the three piṭakas and what each contains, making it distinct from sibling tools like get_sutta or search_by_keyword.

Usage Guidelines: 5/5

The description explicitly lists three scenarios for using this tool: overview questions about the Tipiṭaka, checking coverage before promising search capability, and verifying scope for artifact compilation. This provides clear context and implicitly suggests when not to use it.

parse_pali_word: A

Analyze a Pāli word to find its stem (basic stemming / lemmatization).

💡 Use this tool when:

  • You encounter an inflected Pāli word in a text (e.g. dukkhassa, bhikkhūnaṁ) that get_word_definition cannot find — Pāli inflects words across 8 cases × 2 numbers = 16 forms per stem

  • You want to split a compound word, e.g. sammāsambuddhassa → sammā + sambuddha + -ssa (genitive)

  • You want possible stems before searching further in get_word_definition

🔄 Recommended workflow: parse_pali_word(inflected_form) → get possible_stems[] → call get_word_definition(stem) for each stem until a definition is found

⚠️ Limitations:

  • Basic rule-based only — strips common suffixes (case endings, vowel shortening); not a full morphological analyzer

  • Compound words (samāsa) are not split — e.g. dukkhanirodha is not broken into dukkha + nirodha

  • Does not handle sandhi (junction), e.g. tena ahaṁ → tenāhaṁ

  • Results are possible stems — verify each via get_word_definition
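The rule-based approach described above amounts to suffix stripping along these lines. This is a toy sketch with a tiny subset of endings — the tool's actual rules are more extensive:

```python
# A few common case endings, mapped to the vowel that restores the stem.
# Toy subset for illustration; the real tool covers many more rules.
ENDINGS = {"assa": "a", "ssa": "", "ānaṁ": "a", "ūnaṁ": "u", "ena": "a"}

def possible_stems(word: str) -> list[str]:
    """Return candidate stems by stripping known endings (deduplicated)."""
    stems = []
    for ending, restore in ENDINGS.items():
        if word.endswith(ending) and len(word) > len(ending) + 1:
            stem = word[: -len(ending)] + restore
            if stem not in stems:
                stems.append(stem)
    return stems
```

For example, `possible_stems("dukkhassa")` yields `["dukkha"]`, and `possible_stems("bhikkhūnaṁ")` includes `"bhikkhu"` — candidates that would then be verified via get_word_definition, as the workflow above recommends.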

Parameters (JSON Schema)

word (required) — An inflected Pāli word (e.g. "dukkhassa", "bhikkhūnaṁ", "sīlavā")

Output Schema (JSON Schema)

No output parameters

Behavior: 4/5

No annotations are provided, so the description must cover behavior. It explains the rule-based nature, the limitations (no sandhi, no compound splitting), and that results are possible stems. It could mention idempotency or error handling, but it is adequate for a parsing tool.

Conciseness: 4/5

Well structured with emoji headings, bullet points, and a workflow. Informative but not overly verbose; every sentence adds value. Slightly long, but justified by the tool's complexity.

Completeness: 5/5

Given the tool's complexity (Pāli grammar), the description is thorough: it covers purpose, when to use, workflow, and limitations. An output schema exists, so return values need not be detailed. Contextually complete.

Parameters: 3/5

Schema coverage is 100%, with a clear description of the 'word' parameter. The description adds examples but no additional semantic constraints or format details beyond the schema. The baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool's purpose: analyzing Pāli words for stemming/lemmatization. It provides examples (dukkhassa, bhikkhūnaṁ) and distinguishes itself from the sibling tool get_word_definition by specifying when to use which.

Usage Guidelines: 5/5

Explicit guidance on when to use the tool: when get_word_definition fails, for compound words, and to see possible stems. It recommends a workflow (parse, then call get_word_definition) and lists limitations, preventing misuse.

search_by_keyword: A

ค้นหาข้อความในพระไตรปิฎกด้วย keyword

ค้นหาแบบ trigram (word similarity) บนภาษาที่เปิดใช้งานในเซิร์ฟเวอร์. สามารถกรองผลลัพธ์ตามปิฎกและฉบับแปลได้.

💡 Tip for AI clients: The system's canonical reference is Roman-script Pāli (from SuttaCentral). If the user asks in a disabled language (or one outside the supported set), translate the keyword into Roman Pāli (preferred) or English before calling this tool — e.g. "ทุกข์" → "dukkha", "อานาปานสติ" → "ānāpānassati". See the server instructions above for the available languages.

🔍 Choose the search tool that fits the job:

  • Exact term lookup — e.g. "appearances of ānāpānassati": this tool works well, because trigram matching is best for exact terms

  • Concept search ("content about X") — e.g. "discourses about mindfulness of breathing": use search_hybrid instead, because canonical Pāli has traits that keyword search cannot fully capture: the key term in a section title (Ānāpānapabba) does not appear in the teaching body, which uses other verbs (assasati, passasati, dīghaṁ, rassaṁ) — e.g. DN22's Ānāpānapabba has 16 segments, but the word ānāpāna occurs in only 2 places (header + footer), so the actual content would never match; and stock phrases (e.g. So satova assasati, satova passasati) recur across 10+ suttas, so keyword ranking spreads wide and does not point to a specific sutta

  • General search from a single keyword — use limit≥30 and filter the results yourself, or call the tool with several related terms (root verb + noun + compound)
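The trigram matching mentioned above can be sketched roughly as follows. This is an illustrative approximation in the style of PostgreSQL's pg_trgm extension, not the server's actual implementation:

```python
def trigrams(word: str) -> set[str]:
    """Split a word into letter trigrams, padded the way pg_trgm pads."""
    padded = f"  {word.lower()} "  # two leading spaces, one trailing
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a: str, b: str) -> float:
    """Jaccard-style overlap of the two trigram sets, in [0, 1]."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

# A near-spelling scores far higher than an unrelated word:
print(similarity("anapanassati", "anapanasati"),
      similarity("anapanassati", "dukkha"))
```

This is why the tool is strong for exact or near-exact term lookup: small spelling variations (diacritics dropped, doubled consonants) still share most trigrams, while conceptually related but lexically different words share almost none.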

Parameters (JSON Schema)

  • limit (optional): maximum number of results (default: 10, max: 50)

  • pitaka (optional): filter by piṭaka — "vinaya", "sutta", "abhidhamma", or None (search all). ✅ v1.1+: all three piṭakas are complete (Sutta + Vinaya + Abhidhamma), at parity with SuttaCentral bilara — see list_structure for live counts

  • edition (optional): Thai translation edition — "dhiranandi", "jayasaro", "mbu", "royal", or None (used only when language="thai" and Thai is enabled)

  • keyword (required): the term to search for

  • language (optional, default "pali"): search language — must be in the server's ENABLED_LANGUAGES; a disabled language returns an error
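A call to this tool might be assembled as below. The argument dictionaries are hypothetical examples, not documented server payloads; the outer envelope follows the standard MCP tools/call request shape:

```python
# Hypothetical arguments for a tools/call request to search_by_keyword.
# Note the interaction: "edition" applies only when language == "thai".
args_pali = {
    "keyword": "ānāpānassati",
    "language": "pali",   # default; must be in ENABLED_LANGUAGES
    "pitaka": "sutta",    # or "vinaya", "abhidhamma"; omit to search all
    "limit": 30,          # raise from the default 10 when surveying
}

args_thai = {
    "keyword": "อานาปานสติ",
    "language": "thai",   # only valid if Thai is enabled on the server
    "edition": "royal",   # honoured only together with language="thai"
}

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "search_by_keyword", "arguments": args_pali},
}
print(request["params"]["name"])
```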

Output Schema (JSON Schema)

  • result (required)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries full burden. It explains trigram matching, language dependency, filtering options, and error behavior for disabled languages. It does not cover pagination but mentions limit defaults and max.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is long but well-structured with sections and bullet points. It front-loads purpose and then gives usage advice. Each sentence adds value, though it could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 parameters and an output schema, the description covers all critical aspects: when to use, how to use, comparisons to siblings, behavioral traits, and parameter semantics. It is comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, baseline 3. The description adds significant meaning: keyword translation advice, pitaka scope with reference to list_structure, edition conditionality on language, and language default/error handling.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches the Tipiṭaka by keyword using trigram similarity. It distinguishes itself from sibling tools like search_hybrid, explicitly noting this is for term lookup, not concept search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance: use for term lookup, use search_hybrid for concept search. Also advises on language translation to Pali Roman or English if the user's language is not supported.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_hybrid — A

Hybrid search — combining the strengths of keyword + semantic search

Uses RRF (Reciprocal Rank Fusion) to merge exact keyword-match results with semantic-similarity results. This is the recommended tool for "find content about X", because the semantic side catches suttas that discuss the same concept in different words (e.g. some ānāpānassati teachings use assasati/passasati/dīghaṁ instead of ānāpānassati).

💡 Tips for AI clients:

  • English queries usually work best (e.g. mindfulness of breathing), because the embedding model is multilingual but tuned primarily for EN

  • Thai stop-word handling is weak — if a Thai query performs poorly, translate it into Pāli/English first (see server instructions)

  • The default limit=5 is often too small for a topic survey — for good coverage use limit=15-20 (max 20)

  • Ranking is by similarity, not canonical importance — key suttas (loci classici) such as MN118 and DN22 may rank below minor suttas if the minor suttas match the wording more exactly. Treat the results as a starting point, then follow up with get_sutta for the specific sutta that is the canonical reference
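The RRF fusion named above can be sketched as follows. The constant k=60 is the value commonly used in the RRF literature; the function and document IDs are illustrative, not the server's API:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A sutta ranked moderately by BOTH keyword and semantic lists beats
# one that appears in only a single list:
keyword_hits  = ["mn118", "sn54.1", "dn22"]
semantic_hits = ["sn54.1", "mn118", "an10.60"]
fused = rrf_fuse([keyword_hits, semantic_hits])
print([doc for doc, _ in fused])
```

Because RRF only looks at ranks, not raw scores, it needs no score normalisation between the keyword and semantic retrievers — which is also why, as noted above, its behaviour is hard to tune compared with a raw similarity threshold.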

Parameters (JSON Schema)

  • limit (optional): number of results to return (default 5, max 20)

  • query (required): the query text (Thai, Pāli, or English — English works best)

  • language (optional, default "pali"): language shown in the results — "pali", "thai", "english", or "all"

Output Schema (JSON Schema)

  • result (required)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It explains the RRF fusion technique, ranking behavior, and limitations like language effectiveness. It does not explicitly state non-destructive nature, but as a search tool this is implicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections and bullet points, front-loading the purpose. It is somewhat verbose but each sentence adds value, though could be tightened.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of hybrid search and existence of output schema, the description covers usage tips, limitations, and technical details thoroughly, enabling effective tool usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds significant value: for query it explains language effectiveness, for limit it recommends higher values for surveys, and for language it clarifies options.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a hybrid search combining keyword and semantic search using RRF. It identifies the use case ('recommended for finding content about topic X') but does not explicitly differentiate from sibling tools like search_by_keyword and search_semantic, though the combination is implied.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides specific recommendations for AI clients: using English queries, handling stop words, adjusting limit for coverage, and understanding ranking is similarity-based not canonical. It lacks explicit when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_semantic — A

Semantic search — search by meaning, with no need for exact word matches

Uses vector similarity search (cosine distance) over text_pali embedded with a multilingual MiniLM model.

🤔 Most of the time you should use search_hybrid instead — it already combines this semantic search with keyword search and ranks better. Use this tool only when:

  • you want pure semantic results (no keyword influence)

  • you want fine-grained threshold tuning (hybrid's RRF is hard to tune)

  • you are debugging what semantic search catches compared with keyword search

⚠️ Known limitations:

  • The index is Pāli only (English/Thai queries work, but go through a multilingual embedding that was not tuned on Pāli)

  • English queries usually embed better than Thai (the model is tuned mainly on EN)

  • Distinctive terms (appamāda, dukkha) are better found by exact search → use search_by_keyword

  • Pāli stock phrases appear in many suttas → similarity scores are scattered; read the top 10 rather than fixating on rank 1
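The threshold parameter described below gates results by cosine distance, roughly as sketched here. The toy 3-d vectors stand in for real MiniLM embeddings; this is an illustration of the filtering rule, not the server's code:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity: 0 = same direction, up to 2 = opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

def passes(query_vec, seg_vec, threshold: float = 0.7) -> bool:
    """A segment is returned only if its distance stays under the threshold."""
    return cosine_distance(query_vec, seg_vec) <= threshold

# Toy vectors: "close" points roughly the same way as the query, "far" does not.
query, close, far = [1.0, 0.2, 0.0], [0.9, 0.3, 0.1], [0.0, 0.1, 1.0]
print(passes(query, close), passes(query, far))
```

Lowering the threshold (e.g. to 0.5) shrinks the admissible cone around the query vector and returns fewer, tighter matches; raising it (e.g. to 0.9) widens it for broader recall.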

Parameters (JSON Schema)

  • limit (optional): maximum number of results (default: 5, max: 20)

  • query (required): the query text (English works best, Pāli next; Thai is weak)

  • language (optional, default "pali"): output language — "pali", "thai", "english", or "all" (Thai disabled → null)

  • threshold (optional): maximum cosine distance (lower = stricter match). Default 0.7; drop to 0.5 for stricter matching, raise to 0.9 for broader recall

Output Schema (JSON Schema)

  • result (required)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden. Discloses use of cosine distance, multilingual MiniLM model, language-specific performance (English best, Thai weakest), and known limitations like index being only Pāli and dispersion of scores. Lacks explicit mention of read-only nature, but context implies no side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is well-structured with a concise first line, bullet points for usage and limitations. Every sentence adds value, though slightly verbose. Good front-loading of core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool complexity, 100% schema coverage, and existence of output schema, the description comprehensively covers purpose, usage guidelines, parameter semantics, and limitations. No significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, baseline 3. Description adds meaningful context: explains query language effectiveness, threshold meaning as cosine distance with tuning guidance, and default/max values. Exceeds baseline by providing practical usage tips.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs semantic vector similarity search on text_pali, distinguishing it from keyword search. It explicitly contrasts with sibling tools like search_hybrid and search_by_keyword.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises using search_hybrid for most cases and provides specific scenarios for this tool: pure semantic search, fine-grained threshold tuning, and debugging. Also recommends search_by_keyword for exact Pāli terms.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
