Server Details

Quran MCP server for translation, tafsir, mutashabihat, recitation playlists, and prayer times.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.5/5 across 16 of 16 tools scored.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes, with clear separation between interactive display tools (e.g., ayah_search, ayah_tafsir) and internal preparatory tools (e.g., lookup_tafsirs, search_ayahs_text). However, some overlap exists between ayah_mutashabihat and phrase_mutashabihat, which both handle phrase repetition analysis, potentially causing confusion if not carefully read.

Naming Consistency: 5/5

Tool names follow a highly consistent snake_case pattern with clear verb_noun structures (e.g., ayah_search, list_reciters, play_ayahs). The naming is uniform across all tools, making them predictable and easy to understand at a glance.

Tool Count: 4/5

With 16 tools, the count is slightly high but reasonable for a Quranic server covering search, tafsir, translation, audio, and prayer times. Each tool serves a specific function, though some internal tools (like lookup_* variants) could potentially be consolidated to reduce complexity.

Completeness: 5/5

The toolset provides comprehensive coverage for Quranic study, including search, tafsir, translation, audio playback, and prayer times. It supports both interactive displays and internal data fetching, with clear workflows for user-facing and preparatory tasks, leaving no obvious gaps in the domain.

Available Tools

16 tools
ayah_mutashabihat: Repeated phrases in an ayah (Grade: A)
Read-only

Show repeated phrase metadata for one ayah with an interactive display. Use this when: the user asks which phrases in a specific ayah repeat elsewhere; the user needs phrase IDs and counts before calling phrase_mutashabihat.

Parameters (JSON Schema)
ayah (required): Ayah number within the selected surah.
surah (required): Surah number from 1 to 114.
same_surah_only (optional): When true, only include repeated phrase matches found in the same surah as the input ayah.
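
The three parameters above translate directly into a flat arguments object. A minimal illustrative sketch (the values are examples only, not defaults):

```typescript
// Illustrative arguments for ayah_mutashabihat, built only from the documented
// parameters (surah, ayah, same_surah_only); the values are examples, not defaults.
const ayahMutashabihatArgs = {
  surah: 2,               // surah number, 1 to 114
  ayah: 255,              // ayah number within that surah
  same_surah_only: false, // also include repeated-phrase matches from other surahs
};

console.log(JSON.stringify(ayahMutashabihatArgs, null, 2));
```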

Output Schema (JSON Schema)
Required: phrases, phrase_ids
Optional: error, ayah_key, surah_id, ayah_words, surah_name, ayah_number, errorMessage

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable context about the 'interactive display' behavior and the tool's role in a workflow (preparing for phrase_mutashabihat), which goes beyond what annotations provide. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and well-structured. It uses two sentences: the first states the purpose, and the second provides clear usage guidelines. Every word earns its place with no redundancy or unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations (readOnlyHint, destructiveHint, openWorldHint), 100% schema coverage, and existence of an output schema, the description is complete. It explains the tool's purpose, usage context, and workflow role without needing to detail parameters or return values, which are covered elsewhere.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters well-documented in the schema. The description doesn't add any additional parameter semantics beyond what's in the schema, but it doesn't need to since the schema is comprehensive. Baseline 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Show repeated phrase metadata for one ayah with an interactive display.' It specifies the exact resource (ayah) and action (show repeated phrase metadata), and distinguishes it from sibling tools like phrase_mutashabihat by focusing on a single ayah rather than phrase-level analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidelines: 'Use this when: the user asks which phrases in a specific ayah repeat elsewhere; the user needs phrase IDs and counts before calling phrase_mutashabihat.' This gives clear scenarios for when to use this tool versus alternatives, including a specific sibling tool reference.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ayah_tafsir: Quran tafsir (Grade: A)
Read-only

DEFAULT tool for user-facing tafsir display. Use this for ANY user-facing request to show/see tafsir commentary on a Quran ayah — including 'show me the tafsir of…', 'what does Ibn Kathir say about…', 'explain this ayah'. This is the FINAL tool call for these requests; do not follow it with get_tafsir_text. ONLY skip this widget and use get_tafsir_text when EITHER (a) the user explicitly asks for plain text / raw text / text-only output, OR (b) the result will be piped into another tool in the same turn without being shown to the user. When in doubt, use this widget. SLUG HANDLING: If the user names a specific tafsir (e.g. 'Ibn Kathir', 'Mokhtasar', 'Maarif-ul-Quran', 'Tazkirul Quran'), ALWAYS call lookup_tafsirs first to resolve the exact slug — do not guess the slug from the name. Guessed slugs fail validation. If the user only specifies a language ('English tafsir', 'Arabic tafsir'), you may pass 'languages' without a slug. Each query must include at least one of languages or tafsir_slugs. Use ayah keys in 'surah:ayah' format (for example '2:255'). Limits: max 20 queries per request and max 50 total ayah+tafsir items.

Parameters (JSON Schema)
queries (required): Tafsir queries. Each query defines an ayah range plus tafsir slugs or languages.
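
The single queries parameter carries the whole request. In the sketch below, the range field names from and to are assumptions about the query shape (the nested schema is not reproduced on this page); the 'surah:ayah' key format, the slugs-or-languages rule, and the limits come from the description above.

```typescript
// Sketch of an ayah_tafsir call. The nested field names from and to are assumed;
// 'surah:ayah' keys, the slugs-or-languages rule, and the limits are documented above.
const resolvedSlug = "tafsir-slug-from-lookup_tafsirs"; // hypothetical; resolve via lookup_tafsirs, never guess
const ayahTafsirArgs = {
  queries: [
    { from: "2:255", to: "2:255", tafsir_slugs: [resolvedSlug] }, // a named tafsir
    { from: "1:1", to: "1:7", languages: ["en"] },                // a language-only query
  ],
}; // stay under the limits: max 20 queries, max 50 total ayah+tafsir items
```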

Output Schema (JSON Schema)
Required: ayahs, total_ayahs, tafsir_languages, tafsir_slugs_used
Optional: error, errorMessage

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable behavioral context beyond annotations: the interactive display nature, slug validation requirements ('Guessed slugs fail validation'), and specific limits ('max 20 queries per request and max 50 total ayah+tafsir items'). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. Each sentence adds value: purpose, usage context, slug handling rules, format requirements, and limits. While slightly dense, there's minimal waste - every clause serves to guide the agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple query parameters, slug resolution requirements), the description provides complete contextual guidance. With annotations covering safety, schema fully documenting parameters, and an output schema existing (so return values don't need description), the description focuses appropriately on usage rules, constraints, and integration with sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the 'queries' parameter and its nested properties. The description adds some semantic context about slug handling and format requirements ('Use ayah keys in 'surah:ayah' format'), but doesn't provide additional parameter meaning beyond what's in the schema. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Show Quran ayah tafsir with an interactive commentary display' - a specific verb ('show') and resource ('Quran ayah tafsir') with the qualifier 'interactive commentary display'. It distinguishes from siblings like 'get_tafsir_text' by emphasizing presentation rather than raw text retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this for the final presentation of tafsir to the user' (when to use), 'ALWAYS call lookup_tafsirs first to resolve the exact slug' (when not to guess), and 'If the user only specifies a language... you may pass 'languages' without a slug' (alternative approach). It also references sibling tools like 'lookup_tafsirs' for slug resolution.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ayah_translation: Quran translations (Grade: A)
Read-only

DEFAULT tool for user-facing translation display. Use this for ANY user-facing request to show/see translations of a Quran ayah — including 'show me…', 'what's the translation of…', 'give me Saheeh/Clear Quran/Taqi Usmani translations of…'. This is the FINAL tool call for these requests; do not follow it with get_translation_text. ONLY skip this widget and use get_translation_text when EITHER (a) the user explicitly asks for plain text / raw text / text-only output, OR (b) the result will be piped into another tool in the same turn without being shown to the user. When in doubt, use this widget. SLUG HANDLING: If the user names a specific translator (e.g. 'Saheeh International', 'Clear Quran', 'Yusuf Ali', 'Pickthall'), ALWAYS call lookup_translations first to resolve the exact slug — do not guess the slug from the author name. Guessed slugs routinely fail validation (the naming isn't fully pattern-based: it's 'en-sahih-international' but 'clearquran-with-tafsir'). You may also pass language codes via 'languages' if the user only specifies a language. Each query must include at least one of languages or translations. Use ayah keys in 'surah:ayah' format (for example '2:255'). In queries[].languages use ISO 639-1 codes (for example 'en', 'ur'), not language names. Do not use 'ar'; Arabic translation is unsupported in this tool.

Parameters (JSON Schema)
queries (required): Translation queries. Each query defines an ayah range plus languages or slugs.
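
As with ayah_tafsir, everything goes through queries. In this sketch the range field names from and to are assumptions; the slug 'en-sahih-international' and the ISO 639-1 codes are the examples the description itself cites.

```typescript
// Sketch of an ayah_translation call. The range field names from and to are assumed;
// the slug and the ISO 639-1 codes are the examples cited in the description.
const ayahTranslationArgs = {
  queries: [
    { from: "2:255", to: "2:255", translations: ["en-sahih-international"] },
    { from: "1:1", to: "1:7", languages: ["en", "ur"] }, // ISO codes only; never 'ar' or names like 'english'
  ],
};
```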

Output Schema (JSON Schema)
Required: ayahs, total_ayahs, languages_used
Optional: error, errorMessage

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering safety and scope. The description adds valuable behavioral context beyond annotations: it warns about slug validation failures ('Guessed slugs routinely fail validation'), specifies unsupported languages ('Do not use 'ar'; Arabic translation is unsupported'), and mentions the interactive display aspect. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose. Most sentences earn their place by providing critical usage rules (e.g., slug handling, language codes). It could be slightly more concise by avoiding minor repetition (e.g., the 'ar' warning appears twice), but overall it's efficient and informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (handling multiple queries with ayah ranges, languages, and translations), the description is complete. It covers purpose, usage guidelines, behavioral nuances (slug validation, unsupported languages), and parameter semantics. With annotations covering safety and an output schema likely detailing the interactive display format, no significant gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some semantic context: it explains slug handling rules (e.g., 'en-sahih-international' vs. 'clearquran-with-tafsir'), reinforces format requirements (e.g., 'surah:ayah' format, ISO 639-1 codes), and clarifies that 'ar' is unsupported. However, it doesn't provide significant new information beyond the schema's detailed descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Show Quran ayah translations with an interactive display.' It specifies the verb ('show'), resource ('Quran ayah translations'), and presentation context ('interactive display'), distinguishing it from siblings like 'get_translation_text' (which likely provides raw text) and 'lookup_translations' (which resolves slugs).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this for the final presentation of translations to the user.' It distinguishes when to use this tool versus alternatives by instructing to call 'lookup_translations' first if the user names a specific translator, and it specifies prerequisites (each query must include at least one of languages or translations).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_tafsir_text: Quran tafsir (data only) (Grade: A)
Read-only

INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to any 'show me / explain with tafsir…' request — use ayah_tafsir for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw text / text-only output (e.g. 'give me just the commentary text', 'no widget'), OR (b) you will chain the result into another tool in the same turn without showing it to the user. When in doubt, prefer ayah_tafsir. Do not follow ayah_tafsir with this tool — that is duplicated work. Each query must include at least one of languages or tafsir_slugs. Use ayah keys in 'surah:ayah' format (for example '2:255'). Limits: max 20 queries per request and max 50 total ayah+tafsir items.

Parameters (JSON Schema)
queries (required): Tafsir queries. Each query defines an ayah range plus tafsir slugs or languages.
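
The call shape mirrors ayah_tafsir; only the output differs (plain text, no widget). A brief sketch under the same assumed range field names:

```typescript
// get_tafsir_text takes the same queries shape as ayah_tafsir (range field names
// assumed) but returns plain text for chaining or quoting; no widget is rendered.
const getTafsirTextArgs = {
  queries: [{ from: "18:10", to: "18:10", languages: ["en"] }],
};
```
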
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable behavioral context beyond annotations by specifying limits ('max 20 queries per request and max 50 total ayah+tafsir items'), which helps the agent manage request constraints. However, it does not detail response format or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and usage guidelines in the first sentence, followed by specific instructions and limits. Each sentence adds essential information without redundancy, making it efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with annotations covering safety and scope, the description is largely complete, addressing purpose, usage, and behavioral limits. However, without an output schema, it does not explain return values or error responses, leaving a minor gap in contextual understanding for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the 'queries' parameter and its nested properties. The description adds minimal semantic value beyond the schema, such as clarifying the 'surah:ayah' format with an example and noting the requirement for languages or tafsir_slugs, but does not provide additional syntax or format details. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('fetch tafsir commentary as plain text') and resource ('Quran tafsir'), distinguishing it from the sibling tool 'ayah_tafsir' for full interactive display. It explicitly mentions use cases like 'inline quoting, summarizing, or analysis,' making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('when you need to reference or summarize tafsir content directly in your response') and when not to ('for full interactive tafsir display, use ayah_tafsir instead'). It also specifies prerequisites ('Each query must include at least one of languages or tafsir_slugs'), offering clear alternatives and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_translation_text: Quran translations (data only) (Grade: A)
Read-only

INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to any 'show me / what's the translation of…' request — use ayah_translation for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw text / text-only output (e.g. 'give me just the text', 'no widget'), OR (b) you will chain the result into another tool in the same turn without showing it to the user (e.g. summarize then call play_ayahs). When in doubt, prefer ayah_translation. Do not follow ayah_translation with this tool — that is duplicated work. Each query must include at least one of languages or translations. Use ayah keys in 'surah:ayah' format (for example '2:255'). In queries[].languages use ISO 639-1 codes (for example 'en', 'ur'), not language names. Do not use 'ar'; Arabic translation is unsupported in this tool.

Parameters (JSON Schema)
queries (required): Translation queries. Each query defines an ayah range plus languages or slugs.
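
The input mirrors its widget sibling ayah_translation; the sketch below reuses the same assumed range field names and shows the kind of call made before chaining into another tool such as play_ayahs:

```typescript
// get_translation_text mirrors ayah_translation's input (range field names assumed)
// and is meant for chaining, e.g. fetch text to summarize before calling play_ayahs.
const getTranslationTextArgs = {
  queries: [{ from: "36:1", to: "36:12", translations: ["en-sahih-international"] }],
};
```
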
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable behavioral context beyond annotations: it specifies format requirements ('Use ayah keys in 'surah:ayah' format'), language constraints ('Do not use 'ar'; Arabic translation is unsupported'), and input validation rules ('In queries[].languages use ISO 639-1 codes').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero wasted sentences. It front-loads the core purpose, provides clear usage guidelines, and includes essential constraints and examples - all in a compact paragraph where every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter with nested structure), excellent annotations, and 100% schema coverage, the description provides strong contextual completeness. It covers purpose, usage guidelines, format requirements, and constraints. The main gap is the lack of output schema, but the description compensates by specifying the return type ('plain data for inline quoting').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single 'queries' parameter and its nested properties. The description adds some semantic context about format examples and language restrictions, but doesn't provide significant additional parameter meaning beyond what's already in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('fetch translation text as plain data') and resources ('Quran translations'), and explicitly distinguishes it from sibling tool 'ayah_translation' by specifying use cases ('inline quoting, comparison, or analysis' vs 'full interactive translation display').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('when you need to incorporate translation excerpts directly into your response text') and when to use an alternative ('For full interactive translation display, use ayah_translation instead'), along with prerequisites ('Each query must include at least one of languages or translations').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_reciters: Browse Quran reciters (Grade: A)
Read-only

DEFAULT tool for user-facing reciter-listing questions. Use this for ANY user-facing query like 'what reciters are available', 'who can recite for me', 'list Quran reciters'. This is the FINAL tool call for these requests; do not follow it with lookup_reciters. Shows the catalog in an interactive widget the user can browse. ONLY use lookup_reciters instead when EITHER (a) the user explicitly asks for plain text / raw data, OR (b) you will pipe the result into another tool (e.g. play_ayahs) in the same turn without showing the list. When in doubt, use this widget.

Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)
Required: reciters, totalCount
Optional: error, errorMessage

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable context about the interactive widget output format and user-facing use case, which goes beyond annotations. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with two sentences: the first establishes purpose and default usage, the second clarifies the alternative tool. Every phrase adds value with no redundancy or wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 0 parameters, rich annotations, and an output schema (which handles return values), the description provides complete contextual guidance. It covers purpose, usage vs alternatives, and output behavior without needing to explain parameters or return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is 4. The description appropriately doesn't discuss parameters since none exist, and instead focuses on usage context, which is the right approach for a parameterless tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Browse Quran reciters' and 'Shows the catalog in an interactive widget the user can browse.' It distinguishes from sibling 'lookup_reciters' by specifying this is for user-facing queries where the list is shown to the user, not piped to another tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: 'DEFAULT tool for reciter-listing questions' and 'ONLY use lookup_reciters instead when you will pipe the result directly into another tool...without showing the list to the user.' It clearly defines the context and exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tafsirs: Browse tafsir collections (Grade: A)
Read-only

DEFAULT tool for user-facing tafsir-listing questions. Use this for ANY user-facing query like 'what tafsirs are supported', 'list English tafsirs', 'which tafsir collections do you have'. This is the FINAL tool call for these requests; do not follow it with lookup_tafsirs. Shows the catalog in an interactive widget the user can browse. ONLY use lookup_tafsirs instead when EITHER (a) the user explicitly asks for plain text / raw data, OR (b) you will pipe the result into ayah_tafsir in the same turn without showing the list. When in doubt, use this widget.

Parameters (JSON Schema)
languages (optional): Optional ISO 639-1 language code filters (for example ['en', 'ar']).
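
A one-line sketch of the only parameter, using the ISO codes the schema itself gives as examples:

```typescript
// Illustrative list_tafsirs arguments: the optional ISO 639-1 filter array the
// schema documents; omit it to browse every tafsir collection.
const listTafsirsArgs = { languages: ["en", "ar"] };
```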

Output Schema (JSON Schema)
Required: tafsirs, totalCount
Optional: error, errorMessage, languagesFilter

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and closed-world behavior. The description adds valuable context by specifying that it 'Shows the catalog in an interactive widget the user can browse,' which clarifies the presentation format. However, it doesn't mention rate limits, authentication needs, or other behavioral traits beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the tool's purpose and usage guidelines in two efficient sentences. Every sentence earns its place by providing critical differentiation from the sibling tool and clarifying the user interaction model, with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one optional parameter), comprehensive annotations (read-only, non-destructive, closed-world), and the presence of an output schema, the description is complete. It adequately covers purpose, usage guidelines, and behavioral context without needing to explain return values or other details already handled by structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the optional 'languages' parameter with its ISO 639-1 code format. The description adds no additional parameter semantics beyond what the schema provides, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as the 'DEFAULT tool for tafsir-listing questions' and specifies it 'Shows the catalog in an interactive widget the user can browse.' It distinguishes from sibling 'lookup_tafsirs' by emphasizing user-facing queries like 'what tafsirs are supported' versus programmatic use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('for ANY user-facing query') and when to use the alternative ('ONLY use lookup_tafsirs instead when you will pipe the result directly into ayah_tafsir in the same turn without showing the list to the user'). This clearly differentiates usage contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_translations: Browse Quran translations (Grade: A)
Read-only

DEFAULT tool for user-facing translation-listing questions. Use this for ANY user-facing query like 'what English translations are available', 'list French translations', 'which translators can I choose from'. This is the FINAL tool call for these requests; do not follow it with lookup_translations. Shows the catalog in an interactive widget the user can browse. Use ISO 639-1 codes like 'en', not names like 'english'. ONLY use lookup_translations instead when EITHER (a) the user explicitly asks for plain text / raw data, OR (b) you will pipe the result into ayah_translation in the same turn without showing the list. When in doubt, use this widget. Returned language_name values are display labels. Rows without usable slugs are filtered out.

Parameters (JSON Schema)
locale (optional): Optional locale/language code for response localization (for example 'en' or 'ar').
language (optional): Optional ISO 639-1 language code filter (for example 'en'). Do not pass language names like 'english'.
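
A short sketch distinguishing the two optional parameters; the values are illustrative:

```typescript
// Illustrative list_translations arguments: locale localizes the response,
// language filters the catalog by one ISO 639-1 code; both are optional.
const listTranslationsArgs = {
  locale: "en",   // response localization
  language: "fr", // filter to French translations; never a name like 'english'
};
```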

Output Schema (JSON Schema)
Required: totalCount, translations
Optional: error, errorMessage, languageFilter

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and closed-world behavior. The description adds valuable context: 'Shows the catalog in an interactive widget the user can browse' and 'Rows without usable slugs are filtered out.' These behavioral details aren't covered by annotations, providing useful implementation insights.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero wasted words. It front-loads the primary use case, provides clear usage guidelines, and includes important behavioral details. Every sentence serves a distinct purpose in helping the agent understand when and how to use this tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has comprehensive annotations, 100% schema coverage, and an output schema exists, the description provides exactly what's needed. It explains the user-facing purpose, distinguishes from alternatives, and adds behavioral context about the interactive widget and slug filtering that isn't captured elsewhere.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds some value by reinforcing the ISO 639-1 code requirement for the 'language' parameter, but doesn't provide additional semantic context beyond what's in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Browse Quran translations' and 'Shows the catalog in an interactive widget the user can browse.' It distinguishes from sibling lookup_translations by specifying this is for user-facing queries where the list is shown to the user, not for programmatic use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: 'DEFAULT tool for translation-listing questions' and 'ONLY use lookup_translations instead when you will pipe the result directly into ayah_translation in the same turn without showing the list to the user.' It clearly defines when to use this tool versus the alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_reciters: Browse Quran reciters (data only) (Grade: A)
Read-only

INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to a 'what reciters are available' question — use list_reciters for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw data / no widget, OR (b) you will chain the result into play_ayahs in the same turn without showing the raw list (e.g. user asks to play audio by a named reciter; call this to resolve reciter_id, then call play_ayahs). When in doubt, prefer list_reciters.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering the basic safety profile. The description adds valuable behavioral context beyond annotations by explaining that this is an 'INTERNAL/preparatory tool' meant for chaining results into another tool (play_ayahs) rather than displaying raw data to users. It doesn't describe rate limits or authentication needs, but adds meaningful operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences that each earn their place: first establishes the internal/preparatory nature, second provides the critical 'never use' rule with alternative, third gives the positive usage scenario with concrete example. No wasted words, front-loaded with the most important restriction.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool with comprehensive annotations (readOnly, non-destructive, closed-world) and no output schema, the description provides complete contextual information. It explains the tool's role in the workflow, when to use it versus alternatives, and how results should be handled. The lack of output schema is compensated by explaining that results feed directly into play_ayahs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline would be 4 even with no parameter information in the description. The description appropriately doesn't discuss parameters since there are none, maintaining focus on the tool's purpose and usage context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as a 'preparatory tool' for 'browsing Quran reciters (data only)' and explicitly distinguishes it from its sibling 'list_reciters' by specifying that this one is for internal/chaining use while the sibling shows an interactive widget. The verb 'browse' combined with the resource 'Quran reciters' and the differentiation from alternatives makes this highly specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use and when-not-to-use guidance: 'NEVER use as a standalone answer... use list_reciters for that' and 'Use this ONLY when you will chain the result directly into play_ayahs in the same turn.' It names the specific alternative tool (list_reciters) and provides a concrete example scenario for proper usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_tafsirs: Browse tafsir collections (data only) (Grade: A)
Read-only

INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to a 'what tafsirs are supported' question — use list_tafsirs for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw data / no widget, OR (b) you will chain the result into ayah_tafsir in the same turn without showing the raw list (e.g. resolve a named tafsir to its slug, then call ayah_tafsir). When in doubt, prefer list_tafsirs.

Parameters (JSON Schema)
languages (optional): Optional ISO 639-1 language code filters (for example ['en', 'ar']).

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering the basic safety profile. The description adds valuable context about its internal/preparatory nature and chaining requirement, but doesn't disclose behavioral traits like rate limits, authentication needs, or what specific data is returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences that each serve a distinct purpose: stating the tool's nature, providing negative guidance, and giving positive usage instructions. There's no wasted text and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's preparatory nature, single optional parameter with full schema coverage, and comprehensive annotations, the description provides sufficient context about when and how to use it. The main gap is the lack of output schema, but the description compensates by explaining the chaining purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single parameter 'languages', the schema already documents it as 'Optional ISO 639-1 language code filters'. The description adds no additional parameter information, so it meets the baseline of 3 when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states this is an 'INTERNAL/preparatory tool' for 'browse tafsir collections (data only)', which distinguishes it from the sibling 'list_tafsirs' that shows an interactive widget. However, it doesn't specify the exact verb+resource combination beyond 'browse' and 'collections'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('ONLY when you will chain the result directly into ayah_tafsir'), when NOT to use it ('NEVER use as a standalone answer'), and names the alternative tool ('use list_tafsirs for that'). This is comprehensive usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_translations: Browse Quran translations (data only) (Grade: A)
Read-only

INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to a 'what translations are available' question — use list_translations for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw data / no widget, OR (b) you will chain the result into ayah_translation in the same turn without showing the raw list (e.g. resolve a named translator to the correct slug, then call ayah_translation). When in doubt, prefer list_translations. Use ISO 639-1 codes like 'en', not names like 'english'.

Parameters (JSON Schema)
locale (optional): Optional locale/language code for response localization (for example 'en' or 'ar').
language (optional): Optional ISO 639-1 language code filter (for example 'en'). Do not pass language names like 'english'.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only, non-destructive, and closed-world behavior, which the description doesn't contradict. The description adds valuable context: it's an internal/preparatory tool, specifies it should be used only in chaining scenarios, and warns against using language names instead of ISO codes. However, it doesn't mention rate limits, authentication needs, or error handling, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with key usage rules, uses clear bullet-like structure in a single paragraph, and every sentence adds critical information (internal use, chaining requirement, ISO code specification). There is no wasted text or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple data lookup with no output schema) and rich annotations (read-only, non-destructive), the description is mostly complete. It covers purpose, usage guidelines, and parameter semantics adequately. However, it lacks details on output format or error cases, which could be useful for an agent despite the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (locale and language) with descriptions. The description adds some semantic context by emphasizing ISO 639-1 codes and warning against using names like 'english', but this mostly reinforces schema details rather than providing new meaning. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to browse Quran translations for data retrieval, distinguishing it from list_translations which shows an interactive widget. However, it doesn't specify what resource it accesses (e.g., a database or API endpoint), making it slightly less specific than a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (only when chaining results into ayah_translation in the same turn) and when not to use it (never as a standalone answer to 'what translations are available' questions, using list_translations instead). It also names the alternative tool (list_translations) and specifies the exact scenario for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

phrase_mutashabihat: Where a phrase appears in the Quran (Grade: A)
Read-only

Show phrase mutashabihat occurrences with an interactive display. Use this when: the user provides phrase text and asks where it appears; the user has a phrase_id (for example from ayah_mutashabihat) and wants all matches.

Parameters (JSON Schema)
phrase_id (optional): Mutashabihat phrase ID. Provide phrase_id or phrase_text, but not both.
phrase_text (optional): Arabic phrase text to search for. Provide phrase_text or phrase_id, but not both.
same_surah_only (optional): When true, only include occurrences from the same surah as each matched ayah.
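
The two lookup modes are mutually exclusive. A sketch of both, with a hypothetical phrase_id standing in for a value returned earlier by ayah_mutashabihat:

```typescript
// phrase_mutashabihat accepts phrase_text or phrase_id, never both. The phrase_id
// value (and its numeric type) is hypothetical; in practice it comes from ayah_mutashabihat.
const byText = { phrase_text: "بسم الله الرحمن الرحيم", same_surah_only: false };
const byId = { phrase_id: 42, same_surah_only: true }; // id returned earlier by ayah_mutashabihat
```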

Output Schema (JSON Schema)
Required: found, occurrences
Optional: ayahs, count, error, match, source, surahs, phrase_id, phrase_text, errorMessage, closest_match, not_found_reason, not_found_message

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, the description reveals this is an interactive display tool (suggesting potential UI elements or pagination) and clarifies the relationship with ayah_mutashabihat for obtaining phrase_id. It doesn't contradict annotations but provides additional operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and well-structured with two sentences: the first states the core functionality, and the second provides specific usage scenarios. Every word earns its place, with no redundancy or unnecessary elaboration, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, comprehensive annotations (readOnly, non-destructive, closed-world), 100% schema coverage, and the presence of an output schema, the description provides exactly what's needed. It explains the purpose, usage context, and behavioral aspects without needing to cover technical details already documented elsewhere.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents all three parameters, including their types, constraints, and the mutual exclusivity rule between phrase_id and phrase_text. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation without adding extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Show phrase mutashabihat occurrences with an interactive display') and distinguishes it from siblings by focusing on phrase-level mutashabihat occurrences rather than ayah-level analysis or other Quranic functions. It explicitly identifies the resource (phrase mutashabihat occurrences) and the interactive nature of the display.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines with two clear scenarios: when the user provides phrase text and asks where it appears, and when the user has a phrase_id from ayah_mutashabihat and wants all matches. It names a specific sibling tool (ayah_mutashabihat) as a potential source for phrase_id, giving concrete context for when to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

play_ayahs: Play Quran audio (Grade: A)
Read-only

Play Quranic ayah audio with an interactive player widget. Use this when: the user asks to play/listen to ayahs. RECITER HANDLING: If the user names a specific reciter (e.g. 'Husary', 'Minshawi', 'Al-Afasy', 'Abdul Basit'), ALWAYS call lookup_reciters first to resolve the exact reciter_id — do not guess the ID. Guessed IDs routinely point at the wrong reciter. If the user doesn't specify a reciter, omit reciter_id entirely so default_reciter_id applies. Use ayah keys in 'surah:ayah' format (for example '1:1'). In each query, reciter_id is optional and defaults to default_reciter_id if omitted. Limits: max 50 queries and max 200 total ayahs per request.

ParametersJSON Schema
NameRequiredDescriptionDefault
queriesYesAudio playlist queries. Each query defines an ayah range and optional reciter.
default_reciter_idNoDefault reciter ID used when a query omits reciter_id.

Output Schema

ParametersJSON Schema
NameRequiredDescription
errorNo
itemsYes
errorsNo
queriesNo
total_ayahsNo
errorMessageNo
unique_recitersNo
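As a rough illustration of the call shape implied by the description and input schema above, here is a hedged sketch of a play_ayahs argument object. The top-level fields (queries, default_reciter_id) and the 'surah:ayah' key format come from the documentation; the field names inside each query (from_ayah, to_ayah, reciter_id) and the numeric reciter IDs are assumptions, since the schema only says a query defines an ayah range and optional reciter.

```typescript
// Sketch of play_ayahs arguments. Per-query field names are assumed, not documented here.
// Reciter IDs should be resolved via lookup_reciters first, never guessed.
const playAyahsArgs = {
  default_reciter_id: 7, // e.g. an ID returned by lookup_reciters for "Husary" (illustrative value)
  queries: [
    { from_ayah: "1:1", to_ayah: "1:7" },                   // Surah Al-Fatiha, uses the default reciter
    { from_ayah: "2:255", to_ayah: "2:255", reciter_id: 3 }, // single ayah with an explicit reciter
  ],
};

// Stay within the documented limits: at most 50 queries and 200 total ayahs per request.
console.log(JSON.stringify(playAyahsArgs, null, 2));
```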
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable behavioral context beyond annotations: the interactive player widget detail, the critical reciter resolution workflow (calling lookup_reciters), and specific limits (max 50 queries, max 200 total ayahs). No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear sections (purpose, usage conditions, reciter handling, format example, defaults, limits). Every sentence serves a distinct purpose: the first states the core function, the second provides usage context, and subsequent sentences offer critical implementation details without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple queries with optional parameters), rich annotations, and the presence of an output schema, the description is complete. It covers purpose, usage guidelines, parameter semantics, behavioral limits, and integration with sibling tools (lookup_reciters). No gaps remain for effective agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds meaningful semantic context: it explains the 'surah:ayah' format with an example ('1:1'), clarifies reciter_id handling (defaults to default_reciter_id if omitted, with a resolution workflow), and mentions the max limits which aren't in the parameter descriptions. This provides practical usage guidance beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Play Quranic ayah audio with an interactive player widget') and distinguishes it from siblings like ayah_search, ayah_tafsir, or ayah_translation by focusing on audio playback rather than text retrieval or analysis. The verb+resource combination is precise and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('when the user asks to play/listen to ayahs') and detailed instructions on reciter handling, including when to call lookup_reciters first and when to omit reciter_id. It also distinguishes usage from text-based sibling tools by focusing on audio playback.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

prayer_timesIslamic prayer timesA
Read-only
Inspect

Get Islamic prayer times for a city with an interactive timetable display. Use this when: the user asks for salah times in a location; the user asks to calculate times with a specific prayer method (for example ISNA or MWL).

ParametersJSON Schema
NameRequiredDescriptionDefault
cityYesCity name for prayer time calculation (for example 'Cairo').
methodNoPrayer time calculation method (for example 'ISNA', 'MWL', or 'Makkah').ISNA
countryNoOptional country name to disambiguate city lookup (for example 'Egypt').

Output Schema

ParametersJSON Schema
NameRequiredDescription
cityNo
dateNo
errorNo
methodNo
countryNo
errorCodeNo
coordinatesNo
prayerTimesNo
errorMessageNo
prayerTimesRawNo
formattedAddressNo
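For reference, a minimal sketch of the prayer_times argument object, using the example values the schema itself cites. Nothing here goes beyond the documented parameters; the specific city, method, and country values are illustrative only.

```typescript
// Sketch of prayer_times arguments built directly from the input schema above.
const prayerTimesArgs = {
  city: "Cairo",    // required; city name used for the prayer time calculation
  method: "ISNA",   // optional; calculation method, defaults to ISNA when omitted
  country: "Egypt", // optional; disambiguates cities that share a name
};

console.log(JSON.stringify(prayerTimesArgs));
```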
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations: it mentions 'interactive timetable display' which suggests richer output than basic data, and specifies calculation methods. Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, so the description doesn't need to repeat safety aspects. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is cleanly structured in two sentences: the first states the core functionality and the second provides explicit usage guidelines. Every word earns its place with zero redundancy, and the 'Use this when:' format makes the guidelines immediately accessible.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations (readOnlyHint, openWorldHint), 100% schema coverage, and existence of an output schema, the description provides complete contextual information. It covers purpose, usage scenarios, and behavioral context without needing to explain parameters or return values that are already documented elsewhere.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already thoroughly documents all three parameters (city, method, country) with descriptions, enum values, defaults, and constraints. The description mentions 'specific prayer method (for example ISNA or MWL)' which aligns with but doesn't significantly expand upon the schema's method parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get Islamic prayer times') and resources ('for a city'), distinguishing it from sibling tools focused on Quranic content like ayah search, tafsir, or reciters. It explicitly mentions 'interactive timetable display' which adds unique functionality not implied by the name alone.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines with 'Use this when:' followed by two concrete scenarios: when users ask for prayer times in a location, and when they ask to calculate times with specific methods like ISNA or MWL. This clearly defines when to use this tool versus alternatives (though no explicit exclusions are needed given distinct sibling tools).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_ayahs_textSearch the Quran by Arabic text (data only)A
Read-only
Inspect

INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to a search query — use ayah_search for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw results / no widget, OR (b) you will chain the resolved ayah keys into another tool in the same turn (play_ayahs, ayah_tafsir, or ayah_translation) without showing the raw search results to the user. When in doubt, prefer ayah_search. Do not follow ayah_search with this tool — that is duplicated work. Query is Arabic script only; diacritics and punctuation are ignored. A numeric-only query matches ayahs by that ordinal number.

ParametersJSON Schema
NameRequiredDescriptionDefault
queryYesSearch query in Arabic script. Diacritics and punctuation are stripped automatically; matching is diacritic-insensitive and ranked by BM25 relevance. Numeric fragments (e.g. '255') match ayahs with that ordinal number.
max_resultsNoMaximum number of ayah results to return (1-100, default 20).
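A short sketch of the arguments this internal tool would receive, built only from the two documented parameters. The Arabic query string is an illustrative example, and the comment about chaining paraphrases the description above rather than documenting a result shape (no output schema is published for this tool).

```typescript
// Sketch of search_ayahs_text arguments, assembled from the input schema above.
const searchArgs = {
  query: "الرحمن الرحيم", // Arabic script only; diacritics and punctuation are stripped server-side
  max_results: 5,          // 1-100, default 20
};

// Per the description, the resolved ayah keys (e.g. "2:255") would then be chained into
// play_ayahs, ayah_tafsir, or ayah_translation in the same turn rather than shown raw to the user.
console.log(JSON.stringify(searchArgs));
```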
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering basic safety. The description adds valuable behavioral context: 'Query is Arabic script only; diacritics and punctuation are ignored. A numeric-only query matches ayahs by that ordinal number' and emphasizes it's an 'INTERNAL/preparatory tool' not for direct user display. This goes beyond annotations but doesn't fully describe output format or pagination behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured, front-loading the critical information about internal use and chaining requirements. Every sentence adds value: the opening sentences establish the usage constraints and name the interactive alternative (ayah_search), while the closing sentences explain query handling and numeric matching. No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (search with specific constraints), rich annotations (readOnly, non-destructive, closed-world), and 100% schema coverage, the description provides strong contextual completeness. It clearly explains the tool's role in workflows and its query behavior. The main gap is the lack of an output schema, but the description compensates by specifying how results should be used (chained into other tools).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds some semantic context about query handling ('Arabic script only; diacritics and punctuation are ignored') and numeric matching behavior, but doesn't provide additional parameter details beyond what's in the schema descriptions. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search the Quran by Arabic text (data only)' in the title and elaborates that it's for finding ayah keys to chain into other tools. It explicitly distinguishes from sibling 'ayah_search' which shows an interactive widget, making the distinction clear and specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'NEVER use as the user-facing answer to a search query — use ayah_search for that' and 'Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw results / no widget, OR (b) you will chain the resolved ayah keys into another tool in the same turn... without showing the raw search results to the user.' It names specific alternative tools (play_ayahs, ayah_tafsir, ayah_translation) and gives clear when-to-use and when-not-to-use instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
