Tarteel MCP Server
Server Details
Quran MCP server for translation, tafsir, mutashabihat, recitation playlists, and prayer times.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 9 of 9 tools scored.
Most tools have distinct purposes clearly tied to Quranic resources (e.g., ayah_tafsir for commentary, ayah_translation for translations, play_ayahs for audio). However, ayah_mutashabihat and phrase_mutashabihat both handle phrase repetition analysis and could be confused without careful reading of their descriptions, as they serve related but separate steps in a workflow.
Tool names follow a consistent snake_case pattern with clear verb_noun structures (e.g., list_reciters, ayah_tafsir, play_ayahs). The naming is predictable and readable throughout, making it easy for agents to understand the action and resource involved.
With 9 tools, the server is well-scoped for its Quranic domain, covering key areas like text retrieval (tafsir, translation), audio playback, phrase analysis, and prayer times. Each tool serves a specific function without redundancy, and the count is manageable for agents to navigate effectively.
The tool set provides comprehensive coverage for Quranic study, including text, audio, and contextual data, with clear workflows (e.g., list tools for discovery followed by action tools). A minor gap is the lack of tools for broader Quranic search or chapter-level operations, but core functionalities are well-covered, allowing agents to handle most user queries efficiently.
Available Tools
16 tools

ayah_mutashabihat: Repeated phrases in an ayah (A, Read-only)
Show repeated phrase metadata for one ayah with an interactive display. Use this when: the user asks which phrases in a specific ayah repeat elsewhere; the user needs phrase IDs and counts before calling phrase_mutashabihat.
| Name | Required | Description | Default |
|---|---|---|---|
| ayah | Yes | Ayah number within the selected surah. | |
| surah | Yes | Surah number from 1 to 114. | |
| same_surah_only | No | When true, only include repeated phrase matches found in the same surah as the input ayah. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| error | No | |
| phrases | Yes | |
| ayah_key | No | |
| surah_id | No | |
| ayah_words | No | |
| phrase_ids | Yes | |
| surah_name | No | |
| ayah_number | No | |
| errorMessage | No | |
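The parameter table above maps directly onto a small argument payload. A minimal sketch in Python, assuming the client sends plain JSON-style dicts; the helper name is ours, not part of the server:

```python
def build_mutashabihat_args(surah: int, ayah: int, same_surah_only: bool = False) -> dict:
    """Assemble arguments for an ayah_mutashabihat call, enforcing the
    documented surah range (1-114) before the request is sent."""
    if not 1 <= surah <= 114:
        raise ValueError("surah must be between 1 and 114")
    if ayah < 1:
        raise ValueError("ayah is a 1-based ordinal within the surah")
    return {"surah": surah, "ayah": ayah, "same_surah_only": same_surah_only}

# Ayat al-Kursi (2:255), with matches restricted to the same surah
args = build_mutashabihat_args(surah=2, ayah=255, same_surah_only=True)
```

Checking the range client-side simply fails fast; the server presumably rejects out-of-range values as well.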
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (readOnlyHint=true), it discloses the specific output content (phrase IDs and counts) and contextualizes the tool as a metadata retrieval step in a multi-tool workflow. Does not mention rate limits or pagination, but annotations cover the safety profile adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences total: first defines the action, second provides conditional usage guidelines. Every clause earns its place—no tautology, no redundancy. Well-structured with the 'Use this when:' pattern for quick parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 100% schema coverage and clear workflow positioning, the description adequately supports tool selection despite no output schema. Could briefly define 'mutashabihat' concept for domain clarity, but sufficient for an agent to invoke correctly given the explicit sibling workflow reference.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for surah (1-114), ayah (within selected surah), and same_surah_only filtering. Description implies parameter usage through the workflow scenario but adds no semantic detail beyond what the schema already provides, warranting the baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Show' with clear resource 'repeated phrase metadata for one ayah'. Explicitly distinguishes from sibling phrase_mutashabihat by positioning this as the prerequisite step that provides phrase IDs and counts needed before calling that tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'Use this when:' clause with two specific scenarios. Clearly establishes workflow sequence by stating when users need data 'before calling phrase_mutashabihat', implicitly directing away from that sibling when prerequisite data is missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ayah_search: Search the Quran by Arabic text (A, Read-only)
DEFAULT tool for user-facing Quran search. Use this for ANY user-facing search — 'find ayahs that contain X', 'where does X appear in the Quran', 'search the Quran for X', or similar. This is the FINAL tool call for these requests; do not follow it with search_ayahs_text. Shows matches in an interactive widget the user can browse. Query is Arabic script only (diacritics and punctuation are ignored). A numeric-only query matches ayahs by that ordinal number (for example '255' returns ayahs ending in ':255'). ONLY skip this widget and use search_ayahs_text when EITHER (a) the user explicitly asks for plain text / raw results, OR (b) the results will be fed into another tool in the same turn without being shown. When in doubt, use this widget.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query in Arabic script. Diacritics and punctuation are stripped automatically; matching is diacritic-insensitive and ranked by BM25 relevance. Numeric fragments (e.g. '255') match ayahs with that ordinal number. | |
| max_results | No | Maximum number of ayah results to return (1-100, default 20). | 20 |
Output Schema
| Name | Required | Description |
|---|---|---|
| error | No | |
| query | Yes | |
| results | Yes | |
| errorMessage | No | |
| total_results | Yes | |
| strategy_summary | No | |
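The description says the server ignores diacritics and punctuation itself, so a client need not pre-process the query; the sketch below mirrors the diacritic stripping locally only so the displayed query matches what is actually searched. The helper name is ours:

```python
import unicodedata

def build_search_args(query: str, max_results: int = 20) -> dict:
    """Arguments for ayah_search. The server is documented as
    diacritic-insensitive; stripping combining marks here just mirrors
    that behaviour for local display or logging."""
    if not 1 <= max_results <= 100:
        raise ValueError("max_results must be between 1 and 100")
    stripped = "".join(
        ch for ch in unicodedata.normalize("NFD", query)
        if not unicodedata.combining(ch)
    )
    return {"query": stripped, "max_results": max_results}
```

A numeric-only query such as `"255"` passes through unchanged and, per the description, matches ayahs by ordinal number.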
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering safety and scope. The description adds valuable behavioral context beyond annotations: it specifies that 'Query is Arabic script only (diacritics and punctuation are ignored)' and explains numeric query handling ('A numeric-only query matches ayahs by that ordinal number'), which are not covered by annotations. However, it does not mention rate limits or authentication needs, leaving some behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. Each sentence adds value: the first states the action, the second provides usage guidelines, the third explains query constraints, and the fourth clarifies numeric queries and sibling tool preference. There is no wasted text, and the structure is logical and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with ranked results), rich annotations (covering read-only, non-destructive, closed-world), and high schema coverage (100%), the description is mostly complete. It explains usage scenarios, query constraints, and sibling tool relationships. However, without an output schema, it does not describe the return format (e.g., structure of the interactive list), which is a minor gap in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters (query and max_results). The description adds some semantic context by reiterating that the query is 'Arabic script only' and explaining numeric query behavior, but this largely overlaps with the schema's description. It does not provide additional meaning beyond what the schema offers, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search Quran ayahs by Arabic text and show matches in an interactive list') and distinguishes it from siblings by explicitly mentioning 'prefer search_ayahs_text instead' for intermediate lookups. It identifies the resource (Quran ayahs) and the method (search by Arabic text).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios ('when: the user asks where a phrase appears in the Quran; the user wants to find an ayah from a remembered fragment; the user wants to browse a ranked list of matches') and explicitly names an alternative tool ('prefer search_ayahs_text instead') for specific contexts (intermediate lookups before calling other tools). This gives clear guidance on when to use this tool versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ayah_tafsir: Quran tafsir (A, Read-only)
DEFAULT tool for user-facing tafsir display. Use this for ANY user-facing request to show/see tafsir commentary on a Quran ayah — including 'show me the tafsir of…', 'what does Ibn Kathir say about…', 'explain this ayah'. This is the FINAL tool call for these requests; do not follow it with get_tafsir_text. ONLY skip this widget and use get_tafsir_text when EITHER (a) the user explicitly asks for plain text / raw text / text-only output, OR (b) the result will be piped into another tool in the same turn without being shown to the user. When in doubt, use this widget. SLUG HANDLING: If the user names a specific tafsir (e.g. 'Ibn Kathir', 'Mokhtasar', 'Maarif-ul-Quran', 'Tazkirul Quran'), ALWAYS call lookup_tafsirs first to resolve the exact slug — do not guess the slug from the name. Guessed slugs fail validation. If the user only specifies a language ('English tafsir', 'Arabic tafsir'), you may pass 'languages' without a slug. Each query must include at least one of languages or tafsir_slugs. Use ayah keys in 'surah:ayah' format (for example '2:255'). Limits: max 20 queries per request and max 50 total ayah+tafsir items.
| Name | Required | Description | Default |
|---|---|---|---|
| queries | Yes | Tafsir queries. Each query defines an ayah range plus tafsir slugs or languages. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ayahs | Yes | |
| error | No | |
| total_ayahs | Yes | |
| errorMessage | No | |
| tafsir_languages | Yes | |
| tafsir_slugs_used | Yes | |
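The description states three hard constraints: each query needs languages or tafsir_slugs, ayah keys use 'surah:ayah' format, and at most 20 queries per request. A pre-flight check might look like the sketch below; the nested query shape is not shown on this page, so the `ayah_keys` field name is an assumption:

```python
import re

AYAH_KEY = re.compile(r"^(\d{1,3}):(\d{1,3})$")

def validate_tafsir_queries(queries: list) -> list:
    """Check the constraints stated in the ayah_tafsir description
    before issuing the call. The per-query 'ayah_keys' field is our
    guess at the nested shape, not documented above."""
    if not queries or len(queries) > 20:
        raise ValueError("between 1 and 20 queries per request")
    for q in queries:
        if not (q.get("languages") or q.get("tafsir_slugs")):
            raise ValueError("each query needs languages or tafsir_slugs")
        for key in q.get("ayah_keys", []):
            m = AYAH_KEY.match(key)
            if not m or not 1 <= int(m.group(1)) <= 114:
                raise ValueError(f"bad ayah key: {key!r}")
    return queries
```

Note the description also says to resolve slugs via the lookup tool rather than guessing them; this validator cannot catch an invalid slug, only a structurally malformed query.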
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds significant operational constraints beyond readOnlyHint=true annotation: limit constraints (max 20 queries, 50 items), validation rules (each query must include languages or tafsir_slugs), and discovery dependency (use list_tafsirs for valid slugs). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Highly efficient structure: purpose statement → usage triggers → constraints → format specification → limits. Every sentence earns its place; information density is high without redundancy. Front-loaded with clear action verb.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a read-only retrieval tool: covers query schema constraints, references discovery tool for slug validation, specifies pagination/limits, and implies return type (tafsir commentary). Absence of output schema is acceptable given clear 'Show' verb and domain context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds value via concrete ayah key examples ('2:255'), ISO 639-1 language code clarification, explicit constraint that queries require languages OR tafsir_slugs, and cross-reference to list_tafsirs for valid slug discovery.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Show' + resource 'Quran ayah tafsir' with clear scope. Distinguishes from sibling ayah_translation by emphasizing 'explanation/commentary' versus literal translation, and from list_tafsirs by targeting specific ayah retrieval rather than discovery.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'Use this when' clause defines trigger conditions (explanation requests, language/slug-specific tafsir). However, it does not explicitly state when NOT to use this (e.g., 'do not use for literal translation, use ayah_translation instead') nor name sibling alternatives directly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ayah_translation: Quran translations (A, Read-only)
DEFAULT tool for user-facing translation display. Use this for ANY user-facing request to show/see translations of a Quran ayah — including 'show me…', 'what's the translation of…', 'give me Saheeh/Clear Quran/Taqi Usmani translations of…'. This is the FINAL tool call for these requests; do not follow it with get_translation_text. ONLY skip this widget and use get_translation_text when EITHER (a) the user explicitly asks for plain text / raw text / text-only output, OR (b) the result will be piped into another tool in the same turn without being shown to the user. When in doubt, use this widget. SLUG HANDLING: If the user names a specific translator (e.g. 'Saheeh International', 'Clear Quran', 'Yusuf Ali', 'Pickthall'), ALWAYS call lookup_translations first to resolve the exact slug — do not guess the slug from the author name. Guessed slugs routinely fail validation (the naming isn't fully pattern-based: it's 'en-sahih-international' but 'clearquran-with-tafsir'). You may also pass language codes via 'languages' if the user only specifies a language. Each query must include at least one of languages or translations. Use ayah keys in 'surah:ayah' format (for example '2:255'). In queries[].languages use ISO 639-1 codes (for example 'en', 'ur'), not language names. Do not use 'ar'; Arabic translation is unsupported in this tool.
| Name | Required | Description | Default |
|---|---|---|---|
| queries | Yes | Translation queries. Each query defines an ayah range plus languages or slugs. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ayahs | Yes | |
| error | No | |
| total_ayahs | Yes | |
| errorMessage | No | |
| languages_used | Yes | |
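The language rules in the description are easy to get wrong from an agent: codes must be two-letter ISO 639-1, and 'ar' is explicitly rejected. A small guard, as a sketch (the helper name is ours):

```python
def check_translation_languages(languages: list) -> list:
    """Enforce the language rules from the ayah_translation
    description: lowercase two-letter ISO 639-1 codes only, and 'ar'
    is refused because Arabic translation is unsupported in this tool."""
    for code in languages:
        if code == "ar":
            raise ValueError("'ar' is unsupported in this tool")
        if len(code) != 2 or not code.isalpha() or not code.islower():
            raise ValueError(f"use ISO 639-1 codes like 'en' or 'ur', got {code!r}")
    return languages
```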
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish read-only safety. The description adds crucial behavioral constraints: the 'surah:ayah' format requirement, ISO 639-1 code specification, and the mutual requirement that queries must include at least one of languages or translations. Could improve by mentioning rate limits or error behavior for invalid slugs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Six sentences with zero waste: purpose statement, when-to-use conditions, parameter constraints, ayah key format, language code format, and Arabic exclusion. Information is front-loaded and structured logically without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given rich schema coverage and clear sibling differentiation, the description adequately covers input requirements and usage context. Minor gap: lacks description of return value format (e.g., whether it returns text objects, HTML, etc.), which would be helpful given no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds value by stating the business logic constraint that 'Each query must include at least one of languages or translations', which is not enforceable in the JSON schema (neither field is individually required). It reinforces the format examples, though some repetition exists.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Show' and resource 'Quran ayah translations'. It clearly distinguishes this from siblings by contrasting with tafsir (exegesis), play_ayahs (audio), and mutashabihat (similar verses) through its explicit focus on translation text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent 'Use this when' clause enumerates two specific scenarios (non-Arabic translation requests, specific translator slug requests). It explicitly references sibling tool 'list_translations' for discovering valid slugs, and states constraints like 'Do not use ar', providing clear alternative selection guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tafsir_text: Quran tafsir (data only) (A, Read-only)
INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to any 'show me / explain with tafsir…' request — use ayah_tafsir for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw text / text-only output (e.g. 'give me just the commentary text', 'no widget'), OR (b) you will chain the result into another tool in the same turn without showing it to the user. When in doubt, prefer ayah_tafsir. Do not follow ayah_tafsir with this tool — that is duplicated work. Each query must include at least one of languages or tafsir_slugs. Use ayah keys in 'surah:ayah' format (for example '2:255'). Limits: max 20 queries per request and max 50 total ayah+tafsir items.
| Name | Required | Description | Default |
|---|---|---|---|
| queries | Yes | Tafsir queries. Each query defines an ayah range plus tafsir slugs or languages. | |
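The limits line (max 20 queries, max 50 total ayah+tafsir items) implies clients should budget request size before calling. The page does not define how "items" are counted; the sketch below reads it as ayahs-in-range times tafsir sources per query, and every field name (`n_ayahs`, `tafsir_slugs`, `languages`) is an assumption:

```python
def count_tafsir_items(queries: list) -> int:
    """Estimate the ayah+tafsir item total against the documented cap
    of 50. 'Items' is read here as ayahs-in-range x tafsir sources;
    the page does not define the term, so treat this as a guess."""
    if len(queries) > 20:
        raise ValueError("max 20 queries per request")
    total = 0
    for q in queries:
        sources = len(q.get("tafsir_slugs") or q.get("languages") or [])
        total += q["n_ayahs"] * max(sources, 1)
    if total > 50:
        raise ValueError(f"{total} ayah+tafsir items exceeds the limit of 50")
    return total
```

Under this reading, 5 ayahs with 2 tafsirs costs 10 items, so a long surah range with several tafsirs hits the cap quickly and should be split across requests.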
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable context beyond annotations: it specifies the output format ('plain text'), clarifies use cases ('inline quoting, summarizing, or analysis'), and documents rate limits ('max 20 queries per request and max 50 total ayah+tafsir items'). It doesn't describe pagination or error behavior, but with annotations covering core traits, this is strong additional context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: first states purpose and use cases, second provides sibling differentiation, third covers requirements and limits. Every sentence adds essential information with zero redundancy or filler, making it front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has annotations covering safety (readOnly, non-destructive) and scope (closed-world), plus 100% schema coverage, the description provides strong contextual completeness. It explains purpose, usage guidelines, output format, and limits. The main gap is lack of output schema, but the description compensates by specifying 'plain text' format. For a read-only query tool, this is nearly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal parameter semantics: it mentions the 'surah:ayah' format for ayah keys and reinforces that queries must include languages or tafsir_slugs. However, it doesn't provide additional syntax, format details, or examples beyond what the schema already covers, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'fetch' and resource 'tafsir commentary as plain text', specifying it's for 'inline quoting, summarizing, or analysis'. It explicitly distinguishes from sibling 'ayah_tafsir' by stating 'for full interactive tafsir display, use ayah_tafsir instead', making the distinction unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when you need to reference or summarize tafsir content directly in your response') and when to use an alternative ('for full interactive tafsir display, use ayah_tafsir instead'). It also includes prerequisites ('Each query must include at least one of languages or tafsir_slugs') and limits ('max 20 queries per request and max 50 total ayah+tafsir items').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_translation_text: Quran translations (data only) (A, Read-only)
INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to any 'show me / what's the translation of…' request — use ayah_translation for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw text / text-only output (e.g. 'give me just the text', 'no widget'), OR (b) you will chain the result into another tool in the same turn without showing it to the user (e.g. summarize then call play_ayahs). When in doubt, prefer ayah_translation. Do not follow ayah_translation with this tool — that is duplicated work. Each query must include at least one of languages or translations. Use ayah keys in 'surah:ayah' format (for example '2:255'). In queries[].languages use ISO 639-1 codes (for example 'en', 'ur'), not language names. Do not use 'ar'; Arabic translation is unsupported in this tool.
| Name | Required | Description | Default |
|---|---|---|---|
| queries | Yes | Translation queries. Each query defines an ayah range plus languages or slugs. | |
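The widget/data-only routing rule appears verbatim in both translation tool descriptions, and it reduces to a two-condition decision. A sketch of that rule as an agent-side helper (the function name is ours):

```python
def pick_translation_tool(explicit_plain_text: bool, chained_same_turn: bool) -> str:
    """Encode the routing rule from the descriptions: the interactive
    widget (ayah_translation) is the default; the data-only tool is
    used only when the user explicitly asks for plain text, or when
    the result is piped into another tool without being shown."""
    if explicit_plain_text or chained_same_turn:
        return "get_translation_text"
    return "ayah_translation"
```

This also captures the "do not follow one with the other" rule by construction: exactly one tool is chosen per request.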
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering basic safety and scope. The description adds valuable behavioral context beyond annotations: format requirements ('surah:ayah' format, ISO 639-1 codes), language restrictions ('Do not use 'ar'; Arabic translation is unsupported'), and the tool's purpose for 'inline quoting, comparison, or analysis'. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with four sentences that each serve a distinct purpose: stating the tool's purpose, providing usage guidelines, specifying parameter requirements, and giving format instructions. There is no wasted language, and critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter with nested structure), rich schema descriptions (100% coverage), and helpful annotations, the description provides strong contextual completeness. It explains the tool's purpose, usage guidelines, and key behavioral constraints. The main gap is the lack of output schema, but the description compensates by indicating the output is 'plain data for inline quoting'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single parameter 'queries' and its nested properties. The description adds some semantic context about parameter usage ('Each query must include at least one of languages or translations') and format examples, but doesn't provide significant additional meaning beyond what's in the schema descriptions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('fetch translation text as plain data') and resource ('translation excerpts'), distinguishing it from the sibling 'ayah_translation' tool which is for 'full interactive translation display'. It provides a precise verb+resource combination with explicit sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('when you need to incorporate translation excerpts directly into your response text') and when to use an alternative ('For full interactive translation display, use ayah_translation instead'). It also provides usage constraints ('Each query must include at least one of languages or translations').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_reciters: Browse Quran reciters (A, Read-only)
DEFAULT tool for user-facing reciter-listing questions. Use this for ANY user-facing query like 'what reciters are available', 'who can recite for me', 'list Quran reciters'. This is the FINAL tool call for these requests; do not follow it with lookup_reciters. Shows the catalog in an interactive widget the user can browse. ONLY use lookup_reciters instead when EITHER (a) the user explicitly asks for plain text / raw data, OR (b) you will pipe the result into another tool (e.g. play_ayahs) in the same turn without showing the list. When in doubt, use this widget.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| error | No | |
| reciters | Yes | |
| totalCount | Yes | |
| errorMessage | No | |
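The output schema names `reciters`, `totalCount`, and the error fields, but not the per-reciter shape. When chaining into play_ayahs (the workflow the quality notes describe), an agent needs the reciter identifiers; the sketch below assumes each entry carries an `id` key, which is not documented above:

```python
def reciter_ids(response: dict) -> list:
    """Pull reciter identifiers out of a list_reciters response for a
    follow-up play_ayahs call. The top-level fields match the output
    schema; the per-reciter 'id' key is our assumption."""
    if response.get("error"):
        raise RuntimeError(response.get("errorMessage") or "list_reciters failed")
    return [r["id"] for r in response["reciters"]]
```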
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish read-only/destructive=false safety profile. Description adds valuable workflow context that results are specifically intended as prerequisites for play_ayahs, explaining the tool's role in the broader interaction flow without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose statement ('DEFAULT tool for user-facing reciter-listing questions') followed immediately by concrete trigger phrases, a stop rule, and explicit exceptions. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 0-parameter read-only listing tool with strong annotations, the description is complete. It explains what is listed, when to use it, and how it relates to the siblings lookup_reciters and play_ayahs. The output schema declares its fields (reciters, totalCount) without descriptions, but the reciter_id references imply the essential return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, triggering baseline score of 4 per rubric. Schema coverage is 100% (empty object), so no parameter explanation is required or expected.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear purpose: browse Quran reciters in an interactive widget. Explicitly distinguishes itself from its data-only counterpart lookup_reciters, and the reciter domain separates it from siblings list_tafsirs and list_translations (commentary and text translation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit when-to-use guidance: it declares itself the default and final call for any user-facing reciter-listing query, then spells out the two exceptions (plain-text requests, or piping the result into play_ayahs) that should go to lookup_reciters instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tafsirsBrowse tafsir collectionsARead-onlyInspect
DEFAULT tool for user-facing tafsir-listing questions. Use this for ANY user-facing query like 'what tafsirs are supported', 'list English tafsirs', 'which tafsir collections do you have'. This is the FINAL tool call for these requests; do not follow it with lookup_tafsirs. Shows the catalog in an interactive widget the user can browse. ONLY use lookup_tafsirs instead when EITHER (a) the user explicitly asks for plain text / raw data, OR (b) you will pipe the result into ayah_tafsir in the same turn without showing the list. When in doubt, use this widget.
| Name | Required | Description | Default |
|---|---|---|---|
| languages | No | Optional ISO 639-1 language code filters (for example ['en', 'ar']). |
Output Schema
| Name | Required | Description |
|---|---|---|
| error | No | |
| tafsirs | Yes | |
| totalCount | Yes | |
| errorMessage | No | |
| languagesFilter | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only/destructive safety (readOnlyHint=true). Description adds valuable behavioral context by establishing the widget/data-only split with lookup_tafsirs and the downstream ayah_tafsir workflow. Missing only minor details like pagination.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Highly efficient and front-loaded: default role first, then concrete trigger phrases, a stop rule, and explicit exceptions. Zero redundancy—no repetition of annotations or schema details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has a single optional parameter (simple), and the output schema fields carry no descriptions. Description adequately explains what is returned conceptually (an interactive catalog of tafsir collections) and the split with lookup_tafsirs. Could briefly mention if results include metadata (names/descriptions) but sufficient for a discovery list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (languages parameter fully documented with ISO 639-1 reference). The description itself adds no new semantic detail about the parameter beyond the schema. Baseline 3 appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description identifies the resource (tafsir collections, shown in an interactive widget) and distinguishes the tool from its data-only counterpart lookup_tafsirs, clarifying which of the pair is the default discovery tool feeding ayah_tafsir.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit usage pattern: default and final call for user-facing tafsir-listing queries, with two clearly stated exceptions (plain-text requests, or piping into ayah_tafsir) routed to lookup_tafsirs. Directly names the sibling tools in the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_translationsBrowse Quran translationsARead-onlyInspect
DEFAULT tool for user-facing translation-listing questions. Use this for ANY user-facing query like 'what English translations are available', 'list French translations', 'which translators can I choose from'. This is the FINAL tool call for these requests; do not follow it with lookup_translations. Shows the catalog in an interactive widget the user can browse. Use ISO 639-1 codes like 'en', not names like 'english'. ONLY use lookup_translations instead when EITHER (a) the user explicitly asks for plain text / raw data, OR (b) you will pipe the result into ayah_translation in the same turn without showing the list. When in doubt, use this widget. Returned language_name values are display labels. Rows without usable slugs are filtered out.
| Name | Required | Description | Default |
|---|---|---|---|
| locale | No | Optional locale/language code for response localization (for example 'en' or 'ar'). | |
| language | No | Optional ISO 639-1 language code filter (for example 'en'). Do not pass language names like 'english'. |
Output Schema
| Name | Required | Description |
|---|---|---|
| error | No | |
| totalCount | Yes | |
| errorMessage | No | |
| translations | Yes | |
| languageFilter | No |
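As a sketch of the ISO 639-1 rule the description stresses, the helper below builds a list_translations argument object and rejects language names like 'english'. The function name and client-side shape are illustrative assumptions, not part of the server's API.

```python
import re

def build_list_translations_args(language=None, locale=None):
    """Build arguments for a list_translations call (hypothetical helper).

    Guards against the documented pitfall: 'language' must be a
    two-letter ISO 639-1 code like 'en', never a name like 'english'.
    """
    args = {}
    if language is not None:
        if not re.fullmatch(r"[a-z]{2}", language):
            raise ValueError(
                f"language must be a two-letter ISO 639-1 code, got {language!r}"
            )
        args["language"] = language
    if locale is not None:
        args["locale"] = locale
    return args
```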
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, non-destructive). Description adds valuable return-value semantics ('language_name values are display labels') and data filtering behavior ('Rows without usable slugs are filtered out') that agents need to interpret results correctly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Zero waste: purpose and default role first, then trigger examples, the ISO-code rule, exceptions, return value semantics, and filtering behavior. Front-loaded and logically sequenced. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a discovery tool: explains the listing purpose, the split with lookup_translations and the downstream ayah_translation workflow, parameter basics, and return value interpretation despite the output schema's fields being undescribed. No significant gaps given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed param descriptions. Description reinforces the ISO 639-1 code format ('en', not 'english'), which aligns with the schema but adds minimal new semantic information given the schema already documents examples and constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear purpose: browse Quran translations in an interactive widget. Explicitly distinguishes itself from the data-only lookup_translations and positions the sibling ayah_translation as the downstream consumer of results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance: default and final call for user-facing translation-listing queries, with two stated exceptions (plain-text requests, or piping into 'ayah_translation') routed to lookup_translations. Names the sibling tools directly, clarifying the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup_recitersBrowse Quran reciters (data only)ARead-onlyInspect
INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to a 'what reciters are available' question — use list_reciters for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw data / no widget, OR (b) you will chain the result into play_ayahs in the same turn without showing the raw list (e.g. user asks to play audio by a named reciter; call this to resolve reciter_id, then call play_ayahs). When in doubt, prefer list_reciters.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
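The chaining pattern the description prescribes (resolve a reciter_id from lookup_reciters, then call play_ayahs in the same turn) can be sketched client-side. The 'id' and 'name' field names below are assumptions, since lookup_reciters publishes no output schema.

```python
def resolve_reciter_id(reciters, name_query):
    """Pick a reciter_id from a lookup_reciters-style result by name.

    'reciters' mimics the list the server might return; the field
    names ('id', 'name') are illustrative, not a documented schema.
    """
    needle = name_query.strip().lower()
    for reciter in reciters:
        if needle in reciter["name"].lower():
            return reciter["id"]
    return None  # caller should fall back to default_reciter_id
```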
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, indicating a safe, read-only operation with closed-world data. The description adds valuable context beyond annotations by explaining that it returns 'plain data' for preparatory steps and factual queries, though it doesn't detail rate limits or authentication needs. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by two specific usage scenarios. Every sentence earns its place by providing clear guidance without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema) and rich annotations, the description is mostly complete. It covers purpose, usage guidelines, and behavioral context effectively. However, it doesn't explicitly mention the return format or data structure, which could be helpful since there's no output schema, leaving a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% description coverage, so the schema fully documents the lack of inputs. The description adds semantic context by implying no parameters are needed for this lookup operation, which aligns with the schema. Baseline is 4 for 0 parameters, as the description appropriately doesn't need to compensate for missing param info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's role: an internal, text-only lookup of reciter data ('INTERNAL/preparatory tool — text-only, no widget rendered'). It distinguishes itself from the sibling list_reciters by emphasizing raw-data retrieval for chaining into play_ayahs rather than interactive browsing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios: use it only when the user asks for plain text or raw data, or when the result will be chained into play_ayahs in the same turn (for example, resolving a reciter_id for a named reciter). It clearly distinguishes this tool from the alternative list_reciters, which it names as the default for user-facing questions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup_tafsirsBrowse tafsir collections (data only)ARead-onlyInspect
INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to a 'what tafsirs are supported' question — use list_tafsirs for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw data / no widget, OR (b) you will chain the result into ayah_tafsir in the same turn without showing the raw list (e.g. resolve a named tafsir to its slug, then call ayah_tafsir). When in doubt, prefer list_tafsirs.
| Name | Required | Description | Default |
|---|---|---|---|
| languages | No | Optional ISO 639-1 language code filters (for example ['en', 'ar']). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering the basic safety profile. The description adds valuable context about the tool's role in a workflow (preparatory step for slug discovery) and output format ('plain data'), which goes beyond what annotations provide. However, it doesn't mention potential limitations like rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: it front-loads the tool's internal/preparatory role, then gives a hard rule against user-facing use, explicit either/or usage conditions with a chaining example, and a tie-breaker. There's no wasted language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (lookup operation with filtering), rich annotations covering safety aspects, and 100% schema coverage, the description provides good contextual completeness. It explains the tool's role in a workflow and when to use it, though without an output schema, it could benefit from more detail about the return format beyond 'plain data'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'languages' well-documented in the schema as optional ISO 639-1 language code filters. The description doesn't add any additional parameter information beyond what's in the schema, so it meets the baseline of 3 for adequate coverage without extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's role as a text-only lookup of tafsir data and distinguishes it from its sibling list_tafsirs by framing it as a preparatory step for resolving a named tafsir to its slug before calling ayah_tafsir.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool (only when the user asks for plain text/raw data, or when the result will be chained in the same turn) and what to do next ('resolve a named tafsir to its slug, then call ayah_tafsir'). It also names the alternative, list_tafsirs, as the default for user-facing questions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup_translationsBrowse Quran translations (data only)ARead-onlyInspect
INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to a 'what translations are available' question — use list_translations for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw data / no widget, OR (b) you will chain the result into ayah_translation in the same turn without showing the raw list (e.g. resolve a named translator to the correct slug, then call ayah_translation). When in doubt, prefer list_translations. Use ISO 639-1 codes like 'en', not names like 'english'.
| Name | Required | Description | Default |
|---|---|---|---|
| locale | No | Optional locale/language code for response localization (for example 'en' or 'ar'). | |
| language | No | Optional ISO 639-1 language code filter (for example 'en'). Do not pass language names like 'english'. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only, non-destructive, and closed-world behavior, which the description aligns with by describing a lookup operation. The description adds valuable context beyond annotations: it clarifies the tool's role in a workflow (preparatory step), specifies output format ('plain data'), and provides language code usage rules (ISO 639-1 codes). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured: it front-loads the tool's internal role, then gives explicit usage conditions, a tie-breaker, and a language-code rule. Each sentence adds clear value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 optional parameters), rich annotations (read-only, non-destructive), and lack of output schema, the description is mostly complete. It covers purpose, usage, and parameter rules, but could benefit from clarifying the relationship between 'locale' and 'language' parameters or expected output structure, though annotations help mitigate this gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters. The description adds some semantic context by reinforcing the language code format rule ('Use ISO 639-1 codes like 'en', not names like 'english''), which applies to the 'language' parameter, but doesn't provide additional meaning beyond what's in the schema descriptions. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's role: a text-only lookup of translation data with no widget rendered. It specifies the chaining target (resolve a named translator to the correct slug, then call ayah_translation), distinguishing it from the interactive sibling list_translations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: only when the user asks for plain text or raw data, or when the result will be chained into ayah_translation in the same turn. It also states when not to use it, routing user-facing listing questions to list_translations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
phrase_mutashabihatWhere a phrase appears in the QuranARead-onlyInspect
Show phrase mutashabihat occurrences with an interactive display. Use this when: the user provides phrase text and asks where it appears; the user has a phrase_id (for example from ayah_mutashabihat) and wants all matches.
| Name | Required | Description | Default |
|---|---|---|---|
| phrase_id | No | Mutashabihat phrase ID. Provide phrase_id or phrase_text, but not both. | |
| phrase_text | No | Arabic phrase text to search for. Provide phrase_text or phrase_id, but not both. | |
| same_surah_only | No | When true, only include occurrences from the same surah as each matched ayah. |
Output Schema
| Name | Required | Description |
|---|---|---|
| ayahs | No | |
| count | No | |
| error | No | |
| found | Yes | |
| match | No | |
| source | No | |
| surahs | No | |
| phrase_id | No | |
| occurrences | Yes | |
| phrase_text | No | |
| errorMessage | No | |
| closest_match | No | |
| not_found_reason | No | |
| not_found_message | No |
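The either/or constraint in the parameter table (provide phrase_id or phrase_text, never both) can be enforced before calling the tool. A minimal sketch, with a hypothetical helper name:

```python
def build_phrase_mutashabihat_args(phrase_id=None, phrase_text=None,
                                   same_surah_only=None):
    """Enforce the documented rule: exactly one of phrase_id / phrase_text."""
    if (phrase_id is None) == (phrase_text is None):
        raise ValueError("Provide exactly one of phrase_id or phrase_text.")
    args = {}
    if phrase_id is not None:
        args["phrase_id"] = phrase_id
    else:
        args["phrase_text"] = phrase_text
    if same_surah_only is not None:
        args["same_surah_only"] = same_surah_only
    return args
```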
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds that the tool returns 'occurrences' and 'matches', providing context about the result type, but does not disclose pagination behavior, rate limits, or error conditions beyond what annotations indicate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences. First sentence states purpose immediately; second sentence provides conditional usage guidelines. No redundant content - every clause earns its place in guiding agent behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only lookup tool with 100% schema coverage and clear annotations, the description is complete. It explains the relationship to ayah_mutashabihat (critical for the mutashabihat workflow), while the exclusive-or parameter logic is carried by the schema descriptions. The output schema fields are undescribed, but the tool's place in the ecosystem is adequately established.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter descriptions already provided (e.g., 'Provide phrase_id or phrase_text, but not both'). The description mentions 'phrase text' and 'phrase_id' in usage context but does not add semantic information beyond the schema (no format details, validation rules, or examples).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Show' with specific resource 'phrase mutashabihat occurrences'. It clearly distinguishes from siblings by explicitly referencing ayah_mutashabihat as the source for phrase_id, establishing the workflow relationship between tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'Use this when:' conditions listing two specific scenarios (user provides phrase text vs user has phrase_id). It implicitly defines when not to use (absent these inputs) and explicitly names the sibling tool ayah_mutashabihat as the prerequisite source for phrase_id.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
play_ayahsPlay Quran audioARead-onlyInspect
Play Quranic ayah audio with an interactive player widget. Use this when: the user asks to play/listen to ayahs. RECITER HANDLING: If the user names a specific reciter (e.g. 'Husary', 'Minshawi', 'Al-Afasy', 'Abdul Basit'), ALWAYS call lookup_reciters first to resolve the exact reciter_id — do not guess the ID. Guessed IDs routinely point at the wrong reciter. If the user doesn't specify a reciter, omit reciter_id entirely so default_reciter_id applies. Use ayah keys in 'surah:ayah' format (for example '1:1'). In each query, reciter_id is optional and defaults to default_reciter_id if omitted. Limits: max 50 queries and max 200 total ayahs per request.
| Name | Required | Description | Default |
|---|---|---|---|
| queries | Yes | Audio playlist queries. Each query defines an ayah range and optional reciter. | |
| default_reciter_id | No | Default reciter ID used when a query omits reciter_id. |
Output Schema
| Name | Required | Description |
|---|---|---|
| error | No | |
| items | Yes | |
| errors | No | |
| queries | No | |
| total_ayahs | No | |
| errorMessage | No | |
| unique_reciters | No |
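The limits stated in the description (max 50 queries, max 200 total ayahs, 'surah:ayah' keys like '1:1') are easy to pre-check client-side. The per-query 'ayahs' field below is an assumption for illustration; the published schema only says each query defines an ayah range and optional reciter.

```python
import re

# 'surah:ayah' key format from the description, e.g. '1:1'
AYAH_KEY = re.compile(r"^\d{1,3}:\d{1,3}$")

def validate_play_ayahs_request(queries):
    """Check the documented play_ayahs limits before issuing the call.

    Each query is assumed to carry its ayah keys under 'ayahs';
    the exact query shape is not published, so this is illustrative.
    """
    if len(queries) > 50:
        raise ValueError("play_ayahs allows at most 50 queries per request")
    total = 0
    for query in queries:
        for key in query["ayahs"]:
            if not AYAH_KEY.match(key):
                raise ValueError(f"ayah key must be 'surah:ayah', got {key!r}")
            total += 1
    if total > 200:
        raise ValueError("play_ayahs allows at most 200 total ayahs per request")
    return total
```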
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false. The description adds valuable operational constraints not in annotations: the 50 query/200 ayah limits, the surah:ayah format requirement, and the default_reciter_id fallback logic. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Structured exactly for agent consumption: purpose first, trigger condition second, reciter-resolution rules third, format guidance and limits last. Zero redundancy—every sentence adds distinct information not available in structured fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 100% schema coverage and read-only annotations, the description adequately covers operational limits and the interactive-player result. Minor gap: the output schema fields (items, queries, total_ayahs) carry no descriptions, so the response structure must be inferred.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds the concrete format example ('1:1') for ayah keys and clarifies the hierarchical relationship between query-level reciter_id and top-level default_reciter_id, which helps the agent understand precedence logic.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Play Quranic ayah audio with an interactive player widget,' providing a specific verb (Play), resource (Quranic ayah audio), and scope (interactive player widget). It clearly distinguishes from text-centric siblings like ayah_tafsir and ayah_translation by emphasizing audio playback.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'Use this when' clause gives a clear trigger condition (the user asks to play/listen to ayahs), and the RECITER HANDLING rules spell out when to call lookup_reciters first. This provides unambiguous selection criteria against alternatives like list_reciters (which only lists metadata) or text retrieval tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prayer_timesIslamic prayer timesARead-onlyInspect
Get Islamic prayer times for a city with an interactive timetable display. Use this when: the user asks for salah times in a location; the user asks to calculate times with a specific prayer method (for example ISNA or MWL).
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | City name for prayer time calculation (for example 'Cairo'). | |
| method | No | Prayer time calculation method (for example 'ISNA', 'MWL', or 'Makkah'). | ISNA |
| country | No | Optional country name to disambiguate city lookup (for example 'Egypt'). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| city | No | |
| date | No | |
| error | No | |
| method | No | |
| country | No | |
| errorCode | No | |
| coordinates | No | |
| prayerTimes | No | |
| errorMessage | No | |
| prayerTimesRaw | No | |
| formattedAddress | No | |
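All three input parameters above are documented in the schema, so a call is straightforward to construct. The sketch below shows a hypothetical MCP `tools/call` request; the JSON-RPC envelope follows the MCP specification, and the argument names (`city`, `method`, `country`) are taken directly from the parameter table.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "prayer_times",
    "arguments": {
      "city": "Cairo",
      "method": "ISNA",
      "country": "Egypt"
    }
  }
}
```

Omitting `method` would fall back to the documented default of ISNA; `country` is optional and only needed to disambiguate cities that exist in more than one country.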
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false, covering safety profile. The description adds context that calculation method affects results ('ISNA or MWL'), but does not disclose error handling, rate limits, or what time period the results cover (today vs specific date).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero redundancy. The first delivers purpose immediately; the second delivers usage conditions. Every word serves a specific function for agent selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 3-parameter lookup tool. The description successfully covers the core domain (prayer times), required location context, and calculation methods. Minor gap regarding return value structure (the output schema lists fields but provides no descriptions) and temporal scope of results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the method parameter's importance by referencing 'ISNA or MWL' in the usage context, but does not add syntax, format constraints, or semantic details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Get' followed by the clear resource 'Islamic prayer times' and scope 'for a city'. This distinguishes it clearly from the Quran-focused sibling tools (ayah_mutashabihat, ayah_tafsir, etc.).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit positive triggers with 'Use this when:' followed by two specific scenarios (salah times in a location, calculate times with specific method). Lacks explicit negative conditions ('don't use for...'), but the domain is distinct enough from siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_ayahs_text — Search the Quran by Arabic text, data only (Grade A, Read-only)
INTERNAL/preparatory tool — text-only, no widget rendered. NEVER use as the user-facing answer to a search query — use ayah_search for that (the default interactive widget). Use this ONLY when EITHER (a) the user explicitly asks for plain text / raw results / no widget, OR (b) you will chain the resolved ayah keys into another tool in the same turn (play_ayahs, ayah_tafsir, or ayah_translation) without showing the raw search results to the user. When in doubt, prefer ayah_search. Do not follow ayah_search with this tool — that is duplicated work. Query is Arabic script only; diacritics and punctuation are ignored. A numeric-only query matches ayahs by that ordinal number.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query in Arabic script. Diacritics and punctuation are stripped automatically; matching is diacritic-insensitive and ranked by BM25 relevance. Numeric fragments (e.g. '255') match ayahs with that ordinal number. | |
| max_results | No | Maximum number of ayah results to return (1-100). | 20 |
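A minimal call using the two documented parameters might look like the sketch below. The JSON-RPC envelope follows the MCP specification; the query is Arabic script, as required, and diacritics would be stripped server-side per the description.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "search_ayahs_text",
    "arguments": {
      "query": "الحمد لله",
      "max_results": 5
    }
  }
}
```

Per the description, the resolved ayah keys from the response would then be chained into `play_ayahs`, `ayah_tafsir`, or `ayah_translation` in the same turn rather than shown to the user.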
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable behavioral context: 'Query is Arabic script only; diacritics and punctuation are ignored. A numeric-only query matches ayahs by that ordinal number,' which clarifies input handling and special cases. However, it doesn't mention rate limits or authentication needs, though annotations suffice for basic safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage guidelines and constraints in clear, efficient sentences. Each sentence adds value without redundancy, such as distinguishing from siblings and explaining query handling, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with constraints), annotations cover safety, and the schema fully describes parameters. The description adds necessary context like usage flow and input rules. However, there is no output schema, and the description doesn't detail the return format (e.g., JSON structure), leaving a minor gap in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents parameters. The description adds some context: 'Query is Arabic script only; diacritics and punctuation are ignored. A numeric-only query matches ayahs by that ordinal number,' which partially overlaps with the schema's description of 'query' but reinforces it. No additional parameter insights are provided beyond the schema, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose up front: it is an 'INTERNAL/preparatory tool — text-only, no widget rendered.' It specifies the resource (Quran ayahs searched by Arabic text) and the key constraint (data only, no widget), and it distinguishes itself from the sibling ayah_search, which renders the default interactive widget.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: only when the user explicitly asks for plain text or raw results, or when the resolved ayah keys will be chained into play_ayahs, ayah_tafsir, or ayah_translation in the same turn. It also specifies when not to use it: never as the user-facing answer to a search query ('use ayah_search for that') and never immediately after ayah_search, clearly differentiating it from a sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.