Glama

Server Details

Quran MCP server for translation, tafsir, mutashabihat, recitation playlists, and prayer times.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

9 tools
ayah_mutashabihat (A)
Read-only

Show repeated phrase metadata for one ayah. Use this when: the user asks which phrases in a specific ayah repeat elsewhere; the user needs phrase IDs and counts before calling phrase_mutashabihat.

Parameters (JSON Schema):
- ayah (required): Ayah number within the selected surah.
- surah (required): Surah number from 1 to 114.
- same_surah_only (optional): When true, only include repeated phrase matches found in the same surah as the input ayah.
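A minimal sketch of a client-side argument builder for this tool, assuming the field names and ranges from the parameter table above (the validation logic itself is not part of the server's documented behavior):

```python
# Hypothetical argument builder for ayah_mutashabihat; field names mirror the
# parameter table above, and the 1-114 range comes from the schema description.
def build_mutashabihat_args(surah: int, ayah: int, same_surah_only: bool = False) -> dict:
    if not 1 <= surah <= 114:
        raise ValueError("surah must be between 1 and 114")
    if ayah < 1:
        raise ValueError("ayah must be a positive number within the surah")
    return {"surah": surah, "ayah": ayah, "same_surah_only": same_surah_only}

# Ayat al-Kursi (2:255), restricted to matches within Surah 2:
args = build_mutashabihat_args(surah=2, ayah=255, same_surah_only=True)
```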
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnlyHint=true), it discloses the specific output content (phrase IDs and counts) and contextualizes the tool as a metadata retrieval step in a multi-tool workflow. Does not mention rate limits or pagination, but annotations cover the safety profile adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences total: first defines the action, second provides conditional usage guidelines. Every clause earns its place—no tautology, no redundancy. Well-structured with the 'Use this when:' pattern for quick parsing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 100% schema coverage and clear workflow positioning, the description adequately supports tool selection despite no output schema. Could briefly define 'mutashabihat' concept for domain clarity, but sufficient for an agent to invoke correctly given the explicit sibling workflow reference.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for surah (1-114), ayah (within selected surah), and same_surah_only filtering. Description implies parameter usage through the workflow scenario but adds no semantic detail beyond what the schema already provides, warranting the baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'Show' with clear resource 'repeated phrase metadata for one ayah'. Explicitly distinguishes from sibling phrase_mutashabihat by positioning this as the prerequisite step that provides phrase IDs and counts needed before calling that tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Use this when:' clause with two specific scenarios. Clearly establishes workflow sequence by stating when users need data 'before calling phrase_mutashabihat', implicitly directing away from that sibling when prerequisite data is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ayah_tafsir (A)
Read-only

Show Quran ayah tafsir. Use this when: the user asks for explanation/commentary of ayah meaning; the user asks for tafsir by language or by specific tafsir slug. Each query must include at least one of languages or tafsir_slugs. Use ayah keys in 'surah:ayah' format (for example '2:255'). Limits: max 20 queries per request and max 50 total ayah+tafsir items.

Parameters (JSON Schema):
- queries (required): Tafsir queries. Each query defines an ayah range plus tafsir slugs or languages.
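A pre-flight check for the documented constraints can be sketched as follows. The query field names `from_ayah` and `to_ayah` are assumptions for illustration (the actual sub-schema of `queries` is not shown on this page), and the 50-total-items limit is omitted because the page does not define how items are counted:

```python
import re

AYAH_KEY = re.compile(r"^\d{1,3}:\d{1,3}$")  # 'surah:ayah', e.g. '2:255'

def validate_tafsir_queries(queries: list) -> list:
    # Documented limit: max 20 queries per request.
    if not 1 <= len(queries) <= 20:
        raise ValueError("between 1 and 20 queries per request")
    for q in queries:
        # Each query must include at least one of languages or tafsir_slugs.
        if not (q.get("languages") or q.get("tafsir_slugs")):
            raise ValueError("each query needs languages or tafsir_slugs")
        for key in (q.get("from_ayah"), q.get("to_ayah")):  # assumed field names
            if key is not None and not AYAH_KEY.match(key):
                raise ValueError(f"ayah keys use 'surah:ayah' format, got {key!r}")
    return queries

validate_tafsir_queries([{"from_ayah": "2:255", "to_ayah": "2:257", "languages": ["en"]}])
```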
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds significant operational constraints beyond readOnlyHint=true annotation: limit constraints (max 20 queries, 50 items), validation rules (each query must include languages or tafsir_slugs), and discovery dependency (use list_tafsirs for valid slugs). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Highly efficient structure: purpose statement → usage triggers → constraints → format specification → limits. Every sentence earns its place; information density is high without redundancy. Front-loaded with clear action verb.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a read-only retrieval tool: covers query schema constraints, references discovery tool for slug validation, specifies pagination/limits, and implies return type (tafsir commentary). Absence of output schema is acceptable given clear 'Show' verb and domain context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds value via concrete ayah key examples ('2:255'), ISO 639-1 language code clarification, explicit constraint that queries require languages OR tafsir_slugs, and cross-reference to list_tafsirs for valid slug discovery.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Show' + resource 'Quran ayah tafsir' with clear scope. Distinguishes from sibling ayah_translation by emphasizing 'explanation/commentary' versus literal translation, and from list_tafsirs by targeting specific ayah retrieval rather than discovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'Use this when' clause defines trigger conditions (explanation requests, language/slug-specific tafsir). However, it does not explicitly state when NOT to use this (e.g., 'do not use for literal translation, use ayah_translation instead') nor name sibling alternatives directly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ayah_translation (A)
Read-only

Show Quran ayah translations. Use this when: the user asks for non-Arabic translation text for ayah ranges; the user asks for specific translator output by slug. Each query must include at least one of languages or translations. Use ayah keys in 'surah:ayah' format (for example '2:255'). In queries[].languages use ISO 639-1 codes (for example 'en', 'ur'), not language names. Do not use 'ar'; Arabic translation is unsupported in this tool.

Parameters (JSON Schema):
- queries (required): Translation queries. Each query defines an ayah range plus languages or slugs.
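The language-code rules above (ISO 639-1 codes, not names; 'ar' unsupported) lend themselves to a simple client-side check. This is a hypothetical helper, not part of the server:

```python
def check_translation_languages(languages: list) -> list:
    for code in languages:
        # ISO 639-1 codes are two lowercase letters ('en', 'ur'), not names.
        if not (len(code) == 2 and code.isalpha() and code.islower()):
            raise ValueError(f"use ISO 639-1 codes like 'en' or 'ur', not {code!r}")
        # The tool explicitly rejects Arabic: translation into Arabic is unsupported.
        if code == "ar":
            raise ValueError("'ar' is unsupported; Arabic translation is not offered")
    return languages

check_translation_languages(["en", "ur"])
```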
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only safety. The description adds crucial behavioral constraints: the 'surah:ayah' format requirement, ISO 639-1 code specification, and the mutual requirement that queries must include at least one of languages or translations. Could improve by mentioning rate limits or error behavior for invalid slugs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Six sentences with zero waste: purpose statement, when-to-use conditions, parameter constraints, ayah key format, language code format, and Arabic exclusion. Information is front-loaded and structured logically without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given rich schema coverage and clear sibling differentiation, the description adequately covers input requirements and usage context. Minor gap: lacks description of return value format (e.g., whether it returns text objects, HTML, etc.), which would be helpful given no output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds value by stating the business logic constraint that 'Each query must include at least one of languages or translations', which is not enforceable in the JSON schema (neither field is individually required). It reinforces the format examples, though some repetition exists.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific verb 'Show' and resource 'Quran ayah translations'. It clearly distinguishes this from siblings by contrasting with tafsir (exegesis), play_ayahs (audio), and mutashabihat (similar verses) through its explicit focus on translation text.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent 'Use this when' clause enumerates two specific scenarios (non-Arabic translation requests, specific translator slug requests). It explicitly references sibling tool 'list_translations' for discovering valid slugs, and states constraints like 'Do not use ar', providing clear alternative selection guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_reciters (A)
Read-only

List available Quran reciters. Use this when: the user asks what reciters are available; the user needs a valid reciter_id before calling play_ayahs.

Parameters (JSON Schema): none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only/destructive=false safety profile. Description adds valuable workflow context that results are specifically intended as prerequisites for play_ayahs, explaining the tool's role in the broader interaction flow without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. Front-loaded purpose statement followed immediately by structured usage guidelines ('Use this when:'). Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 0-parameter read-only listing tool with strong annotations, the description is complete. It explains what is listed, when to use it, and how the results integrate with the sibling tool play_ayahs. No output schema exists, but the reciter_id reference implies the essential return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, triggering baseline score of 4 per rubric. Schema coverage is 100% (empty object), so no parameter explanation is required or expected.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'List' with resource 'Quran reciters'. Explicitly distinguishes from siblings list_tafsirs and list_translations by specifying the domain (audio recitation vs. commentary/text translation).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit when-to-use clauses covering two scenarios: direct user queries about availability, and prerequisite workflow (obtaining reciter_id before calling play_ayahs). Clearly links to sibling tool play_ayahs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tafsirs (A)
Read-only

List available Quran tafsirs with optional language filtering. Use this when: the user asks what tafsir collections exist; the user needs valid tafsir slugs before calling ayah_tafsir.

Parameters (JSON Schema):
- languages (optional): ISO 639-1 language code filters (for example ['en', 'ar']).
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/destructive safety (readOnlyHint=true). Description adds valuable behavioral context that results include 'valid tafsir slugs' needed for the ayah_tafsir sibling, establishing the prerequisite pattern. Missing only minor details like pagination.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two highly efficient sentences. Front-loaded with action verb. Usage guidelines separated by semicolon/period structure. Zero redundancy—no repetition of annotations or schema details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has single optional parameter (simple) and no output schema. Description adequately explains return type conceptually ('tafsir collections', 'slugs') and prerequisite relationship. Could briefly mention if results include metadata (names/descriptions) but sufficient for a discovery list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (languages parameter fully documented with ISO 639-1 reference). Description mentions 'optional language filtering' which aligns with schema but adds no new semantic detail (format/range was already in schema). Baseline 3 appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'List' with resource 'Quran tafsirs' and scope 'optional language filtering'. It distinguishes from siblings by referencing ayah_tafsir, clarifying this is a discovery tool for that sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'Use this when:' pattern with two clear scenarios: (1) asking what collections exist, (2) needing valid slugs for ayah_tafsir. Directly names sibling tool as prerequisite dependency.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_translations (A)
Read-only

List available Quran translations with optional language-code filtering (use ISO 639-1 codes like 'en', not names like 'english'). Use this when: the user asks what translation options exist; the user needs translation slugs before calling ayah_translation. Returned language_name values are display labels. Rows without usable slugs are filtered out.

Parameters (JSON Schema):
- locale (optional): Locale/language code for response localization (for example 'en' or 'ar').
- language (optional): ISO 639-1 language code filter (for example 'en'). Do not pass language names like 'english'.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, non-destructive). Description adds valuable return-value semantics ('language_name values are display labels') and data filtering behavior ('Rows without usable slugs are filtered out') that agents need to interpret results correctly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: purpose with param hint, usage guidelines, return value semantics, filtering behavior. Front-loaded and logically sequenced. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a discovery tool: explains the listing purpose, the prerequisite relationship with the consumption sibling, parameter basics, and return value interpretation despite lack of output schema. No significant gaps given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed param descriptions. Description mentions 'optional language-code filtering' and ISO code format which reinforces schema content but adds minimal new semantic information given the schema already documents examples and constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'List' + resource 'Quran translations' + scope 'available/with optional filtering'. Explicitly distinguishes from sibling 'ayah_translation' by positioning this as the discovery tool for obtaining slugs needed by that tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Use this when' clause with two specific scenarios: when user asks for options and when they need slugs before calling 'ayah_translation'. Names the sibling consumption tool directly, clarifying the prerequisite workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

phrase_mutashabihat (A)
Read-only

Show phrase mutashabihat occurrences. Use this when: the user provides phrase text and asks where it appears; the user has a phrase_id (for example from ayah_mutashabihat) and wants all matches.

Parameters (JSON Schema):
- phrase_id (optional): Mutashabihat phrase ID. Provide phrase_id or phrase_text, but not both.
- phrase_text (optional): Arabic phrase text to search for. Provide phrase_text or phrase_id, but not both.
- same_surah_only (optional): When true, only include occurrences from the same surah as each matched ayah.
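The exclusive-or rule between phrase_id and phrase_text is not expressible in a basic JSON schema, so a client may want to enforce it before the call. A sketch, with the builder itself being hypothetical:

```python
def build_phrase_args(phrase_id=None, phrase_text=None, same_surah_only=False) -> dict:
    # Exactly one of phrase_id / phrase_text must be set, per the schema notes above.
    if (phrase_id is None) == (phrase_text is None):
        raise ValueError("provide exactly one of phrase_id or phrase_text")
    args = {"same_surah_only": same_surah_only}
    if phrase_id is not None:
        args["phrase_id"] = phrase_id
    else:
        args["phrase_text"] = phrase_text
    return args

# 42 is a placeholder ID, e.g. obtained earlier from ayah_mutashabihat:
build_phrase_args(phrase_id=42)
```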
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds that the tool returns 'occurrences' and 'matches', providing context about the result type, but does not disclose pagination behavior, rate limits, or error conditions beyond what annotations indicate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficiently structured sentences. First sentence states purpose immediately; second sentence provides conditional usage guidelines. No redundant content - every clause earns its place in guiding agent behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only lookup tool with 100% schema coverage and clear annotations, the description is complete. It explains the relationship to ayah_mutashabihat (critical for the mutashabihat workflow), defines the exclusive-or parameter logic, and adequately describes the tool's place in the ecosystem without requiring output schema documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed parameter descriptions already provided (e.g., 'Provide phrase_id or phrase_text, but not both'). The description mentions 'phrase text' and 'phrase_id' in usage context but does not add semantic information beyond the schema (no format details, validation rules, or examples).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Show' with specific resource 'phrase mutashabihat occurrences'. It clearly distinguishes from siblings by explicitly referencing ayah_mutashabihat as the source for phrase_id, establishing the workflow relationship between tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Use this when:' conditions listing two specific scenarios (user provides phrase text vs user has phrase_id). It implicitly defines when not to use (absent these inputs) and explicitly names the sibling tool ayah_mutashabihat as the prerequisite source for phrase_id.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

play_ayahs (A)
Read-only

Fetch Quranic ayah audio playlist data. Use this when: the user asks to play/listen to ayahs; the user needs reciter-specific audio URLs for an ayah range. Use ayah keys in 'surah:ayah' format (for example '1:1'). In each query, reciter_id is optional and defaults to default_reciter_id if omitted. Limits: max 50 queries and max 200 total ayahs per request.

Parameters (JSON Schema):
- queries (required): Audio playlist queries. Each query defines an ayah range and optional reciter.
- default_reciter_id (optional): Default reciter ID used when a query omits reciter_id.
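The limits and the reciter fallback can be checked client-side before the call. This sketch assumes `from_ayah`/`to_ayah` field names and only counts same-surah ranges (cross-surah counts would need a table of surah lengths, which is omitted here):

```python
def ayah_span(from_key: str, to_key: str) -> int:
    # Count ayahs in an in-order, same-surah 'surah:ayah' range.
    s1, a1 = (int(p) for p in from_key.split(":"))
    s2, a2 = (int(p) for p in to_key.split(":"))
    if s1 != s2 or a2 < a1:
        raise ValueError("expected an in-order range within one surah")
    return a2 - a1 + 1

def validate_play_queries(queries: list, default_reciter_id=None) -> int:
    # Documented limits: max 50 queries and max 200 total ayahs per request.
    if not 1 <= len(queries) <= 50:
        raise ValueError("between 1 and 50 queries per request")
    total = sum(ayah_span(q["from_ayah"], q["to_ayah"]) for q in queries)
    if total > 200:
        raise ValueError("max 200 total ayahs per request")
    # Per-query reciter_id falls back to default_reciter_id when omitted.
    for q in queries:
        q.setdefault("reciter_id", default_reciter_id)
    return total

# Al-Fatiha (1:1-1:7) with a placeholder reciter ID:
validate_play_queries([{"from_ayah": "1:1", "to_ayah": "1:7"}], default_reciter_id=7)
```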
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and destructiveHint=false. The description adds valuable operational constraints not in annotations: the 50 query/200 ayah limits, the surah:ayah format requirement, and the default_reciter_id fallback logic. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences structured exactly for agent consumption: purpose first, trigger conditions second, format guidance third, limits fourth. Zero redundancy—every sentence adds distinct information not available in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 100% schema coverage and read-only annotations, the description adequately covers operational limits and hints at return value type ('audio playlist data'/'audio URLs'). Minor gap: could briefly characterize the response structure (array of URLs vs single playlist object) given no output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds the concrete format example ('1:1') for ayah keys and clarifies the hierarchical relationship between query-level reciter_id and top-level default_reciter_id, which helps the agent understand precedence logic.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with 'Fetch Quranic ayah audio playlist data,' providing a specific verb (Fetch), resource (Quranic ayah audio), and scope (playlist data). It clearly distinguishes from text-centric siblings like ayah_tafsir and ayah_translation by emphasizing audio/playlist aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'Use this when' clause lists two clear trigger conditions: user asks to play/listen to ayahs, or needs reciter-specific audio URLs. This provides unambiguous selection criteria against alternatives like list_reciters (which only lists metadata) or text retrieval tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

prayer_times (A)
Read-only

Get Islamic prayer times for a city. Use this when: the user asks for salah times in a location; the user asks to calculate times with a specific prayer method (for example ISNA or MWL).

Parameters (JSON Schema):
- city (required): City name for prayer time calculation (for example 'Cairo').
- method (optional, default 'ISNA'): Prayer time calculation method (for example 'ISNA', 'MWL', or 'Makkah').
- country (optional): Country name to disambiguate city lookup (for example 'Egypt').
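A minimal argument builder matching the table above; the 'ISNA' default comes from the schema, and the helper itself is hypothetical:

```python
def build_prayer_args(city: str, method: str = "ISNA", country=None) -> dict:
    # city is the only required field; method defaults to 'ISNA' per the schema.
    if not city.strip():
        raise ValueError("city is required")
    args = {"city": city, "method": method}
    if country:
        args["country"] = country  # disambiguates common city names
    return args

build_prayer_args("Cairo", method="MWL", country="Egypt")
```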
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and destructiveHint=false, covering safety profile. The description adds context that calculation method affects results ('ISNA or MWL'), but does not disclose error handling, rate limits, or what time period the results cover (today vs specific date).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero redundancy. The first delivers purpose immediately; the second delivers usage conditions. Every word serves a specific function for agent selection.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple 3-parameter lookup tool. The description successfully covers the core domain (prayer times), required location context, and calculation methods. Minor gap regarding return value structure (no output schema exists) and temporal scope of results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the method parameter's importance by referencing 'ISNA or MWL' in the usage context, but does not add syntax, format constraints, or semantic details beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific verb 'Get' followed by the clear resource 'Islamic prayer times' and scope 'for a city'. This distinguishes it clearly from the Quran-focused sibling tools (ayah_mutashabihat, ayah_tafsir, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit positive triggers with 'Use this when:' followed by two specific scenarios (salah times in a location, calculate times with specific method). Lacks explicit negative conditions ('don't use for...'), but the domain is distinct enough from siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
