
Sukoon Mutual Fund MCP Server

Server Details

Connect Sukoon MCP to Claude, Cursor, or any MCP client. Research 14,000+ Indian mutual funds with 20+ years of daily NAV history, benchmarks, and rich performance metrics like CAGR, alpha, beta, Sharpe ratio, drawdown, volatility, and rolling returns — all in plain English. Compare funds, analyze risk, backtest strategies, and uncover insights for free, forever. https://sukoon.money

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across 17 of 17 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose, from comparing funds and screening to retrieving specific metrics or listings. No two tools overlap in functionality, ensuring agents can easily select the right one.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case (e.g., compare_funds, get_metrics, list_categories). The naming is predictable and uniform across the entire set.

Tool Count: 5/5

Seventeen tools is well-scoped for a mutual fund data server, covering search, screening, comparison, and retrieval of various metrics without being excessive or insufficient.

Completeness: 4/5

The tool set covers most common mutual fund operations: search, listing, comparison, NAV, returns, risk metrics, holdings, and benchmarks. Minor gaps like sector allocation or fund documents exist, but core workflows are well-supported.

Available Tools

21 tools
compare_funds: B

Compare 2–10 funds side-by-side on all key metrics.

Parameters (JSON Schema)
scheme_codes (required): Array of AMFI scheme codes to compare (2–10)
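
As a sketch of how an MCP client might invoke this tool (the scheme codes below are placeholders, not real AMFI codes, and the JSON-RPC envelope follows the generic MCP tools/call shape), the 2–10 constraint can be checked client-side before sending:

```python
import json

# Placeholder AMFI scheme codes, for illustration only.
scheme_codes = ["100001", "100002", "100003"]

# The schema allows 2-10 codes; validating locally avoids a
# failed round trip to the server.
if not 2 <= len(scheme_codes) <= 10:
    raise ValueError("compare_funds expects 2 to 10 scheme codes")

# Hypothetical JSON-RPC "tools/call" request envelope.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compare_funds",
        "arguments": {"scheme_codes": scheme_codes},
    },
}
print(json.dumps(request, indent=2))
```

The same local check mirrors the schema's minItems/maxItems, so malformed calls fail fast on the client.
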
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description must disclose behavior. It mentions 'all key metrics' but does not specify which metrics, output format, or edge cases like invalid fund codes. Lacks behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One concise sentence with no filler, front-loaded with the core action and scope. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Minimal description with 1 param and no output schema. It conveys the purpose but omits details on output, metrics, and caveats. Given the complexity, more context would aid selection among many siblings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter, and the description's '2-10 funds' echoes the schema's minItems/maxItems. The description adds no new semantic meaning beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Compare'), the resource ('funds'), and the scope ('2-10 funds side-by-side on all key metrics'). It distinguishes from siblings like get_fund (singular) or get_metrics (one fund).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like screen_funds or search_funds. Does not mention prerequisites or scenarios where comparison is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_funds_holding: A

Find all funds that hold a specific stock (by name partial match or ISIN).

Parameters (JSON Schema)
limit (optional): Max funds to return (default: 20)
identifier (required): Stock name (partial match) or ISIN
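
Illustratively, a client can tell whether the single identifier field holds an ISIN or a partial name before calling the tool. This is a sketch: the regex encodes the standard 12-character ISIN shape (2-letter country code, 9 alphanumerics, numeric check digit), and the sample values are format examples only.

```python
import re

# Standard ISIN shape: 2 letters, 9 alphanumerics, 1 check digit.
ISIN_RE = re.compile(r"^[A-Z]{2}[A-Z0-9]{9}[0-9]$")

def build_arguments(identifier: str, limit: int = 20) -> dict:
    """Build find_funds_holding arguments; limit mirrors the
    documented default of 20."""
    return {"identifier": identifier, "limit": limit}

def identifier_kind(identifier: str) -> str:
    """Classify the identifier so callers know which match mode applies."""
    return "isin" if ISIN_RE.fullmatch(identifier) else "name"

print(identifier_kind("INE002A01018"))  # well-formed ISIN -> "isin"
print(identifier_kind("hdfc bank"))     # partial name -> "name"
print(build_arguments("hdfc bank", limit=5))
```

Since the server accepts either form in one field, this classification is purely informational on the client side.
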
Behavior: 2/5

No annotations are provided, so the description bears full burden. It discloses the search capability but omits behaviors like pagination, error handling, or response format. Minimal details beyond the core function.

Conciseness: 5/5

Single sentence with no wasted words. Information is efficiently packed.

Completeness: 3/5

For a simple 2-param tool, the description covers purpose and identifier method. However, it omits details on the output format and what the returned list contains (e.g., fund names, IDs). Adequate but not thorough.

Parameters: 3/5

Schema description coverage is 100%, so parameters are well-documented. The description adds no new meaning beyond the schema, merely restating 'name partial match or ISIN' already present. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the verb 'Find' and the resource 'all funds that hold a specific stock'. It specifies matching by name partial match or ISIN, distinguishing it from siblings like 'get_holdings' (which shows holdings of a fund) and 'search_funds' (which searches funds by criteria).

Usage Guidelines: 3/5

The description implies usage for finding funds holding a stock, but lacks explicit guidance on when not to use or comparisons to alternatives like 'get_holdings' or 'search_funds'. No exclusions or caveats are provided.

get_benchmark: A

Get TRI (Total Return Index) series for a benchmark index. Available: NIFTY 50, NIFTY 100, NIFTY 500, NIFTY MIDCAP 150, NIFTY SMALLCAP 250, and others.

Parameters (JSON Schema)
to_date (optional): End date in YYYY-MM-DD format
from_date (optional): Start date in YYYY-MM-DD format
index_name (required): Benchmark index name (e.g. "NIFTY 50", "NIFTY MIDCAP 150")
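
Since both dates are optional, a thin helper can build the arguments, including a date key only when a value is supplied and validating the YYYY-MM-DD shape locally. A minimal sketch, assuming index names are passed through to the server as-is:

```python
from datetime import date

def benchmark_arguments(index_name, from_date=None, to_date=None):
    """Build get_benchmark arguments; omit absent dates entirely."""
    args = {"index_name": index_name}
    if from_date is not None:
        args["from_date"] = from_date
    if to_date is not None:
        args["to_date"] = to_date
    # date.fromisoformat raises ValueError on anything that is not
    # a valid YYYY-MM-DD date, catching malformed input early.
    for key in ("from_date", "to_date"):
        if key in args:
            date.fromisoformat(args[key])
    return args

print(benchmark_arguments("NIFTY 50"))
print(benchmark_arguments("NIFTY MIDCAP 150", "2020-01-01", "2024-12-31"))
```

Omitting the keys, rather than sending nulls, keeps the request aligned with the optional parameters in the schema.
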
Behavior: 2/5

With no annotations, the description must disclose behavioral traits. It uses 'Get' implying a read operation but does not confirm idempotency, authentication needs, rate limits, or return format requirements. This leaves critical behavioral aspects unspecified.

Conciseness: 5/5

The description is extremely concise: two sentences, front-loaded with the core purpose. The first explains the action and data, the second provides concrete examples. No wasted words.

Completeness: 3/5

The description is adequate for a simple data retrieval tool but lacks details about the output format (e.g., what fields are returned, pagination). Given the absence of an output schema, it should at least hint at the response structure to be fully complete.

Parameters: 3/5

Schema coverage is 100% (all parameters have descriptions). The description adds value by listing example index names and indicating optionality of dates, but it does not provide additional semantics beyond the schema (e.g., no details on date range constraints or default behavior).

Purpose: 5/5

The description clearly states the tool retrieves TRI series for a benchmark index, with specific examples like NIFTY 50 and NIFTY 100. The verb 'Get' combined with the resource 'TRI series' makes the purpose unambiguous and distinct from sibling tools.

Usage Guidelines: 3/5

No explicit guidance on when to use this tool over alternatives is provided. The description implicitly suggests usage by listing available benchmarks but does not mention exclusions, prerequisites, or comparison with sibling tools.

get_category_stats: A

Get median and average for all metrics across all funds in a SEBI category.

Parameters (JSON Schema)
category (required): SEBI fund category name
Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates a read operation (getting stats) but omits details about permissions, rate limits, response format, or whether it includes all metrics or only median/average. While not contradictory, it provides only minimal behavioral context.

Conciseness: 5/5

The description is a single, front-loaded sentence with no redundancy. Every word contributes purpose: 'Get', 'median and average', 'all metrics', 'across all funds in a SEBI category'. It earns its place efficiently.

Completeness: 3/5

Given no output schema, the description does not specify the return structure (e.g., list of metric names with values). It mentions 'all metrics' but does not define which metrics. For a simple one-parameter tool, this is adequate but lacks completeness about what the agent can expect in response.

Parameters: 4/5

The schema has 100% description coverage for the 'category' parameter. The description adds extra meaning by specifying 'across all funds in a SEBI category,' reinforcing the parameter's role. This goes beyond the schema's terse 'SEBI fund category name' and helps the agent understand the scope.

Purpose: 5/5

The description clearly states the verb 'Get' and the specific resource: 'median and average for all metrics across all funds in a SEBI category.' This distinguishes it from siblings like list_categories (lists categories) and list_funds_in_category (lists funds), as it focuses on aggregate statistics per category.

Usage Guidelines: 3/5

The description implies usage for aggregate category statistics but lacks explicit when-to-use or when-not-to-use guidance. It does not mention alternatives like get_metrics for individual funds or compare_funds for cross-fund comparison, leaving the agent to infer context.

get_debt_quants: A

Get debt quantitative metrics for a fund: Macaulay Duration, Modified Duration, average maturity (years), and annualised YTM (%). Available for debt and hybrid funds from AMFI portfolio disclosures.

Parameters (JSON Schema)
scheme_code (required): AMFI scheme code
Behavior: 3/5

No annotations are provided, so the description carries full burden. It states the data source (AMFI portfolio disclosures) and lists returned metrics, but does not disclose behavioral traits such as data freshness, authentication requirements, or rate limits. The description is adequate but not rich.

Conciseness: 5/5

The description is two concise sentences that front-load the action and outputs. Every word adds value, with no extraneous information.

Completeness: 4/5

Given the tool has one required parameter, no output schema, and no annotations, the description is largely complete: it specifies the metrics and applicability. However, the lack of output format details or any behavioral aspects slightly reduces completeness.

Parameters: 3/5

Schema coverage is 100% and the single parameter 'scheme_code' is described as 'AMFI scheme code' in the schema. The description does not add additional meaning beyond what the schema provides, so baseline 3 is appropriate.

Purpose: 5/5

The description starts with a specific verb 'Get' and resource 'debt quantitative metrics for a fund', listing exact metrics (Macaulay Duration, Modified Duration, average maturity, YTM). It distinguishes from siblings like get_metrics (likely equity) and get_sif_metrics by stating it is for debt and hybrid funds from AMFI disclosures.

Usage Guidelines: 4/5

The description provides clear context: 'Available for debt and hybrid funds from AMFI portfolio disclosures.' This implies not for other fund types, though it does not explicitly name alternatives or when-not-to-use. The sibling tools like get_metrics and get_sif_metrics serve as implicit alternatives.

get_fund: A

Get full details for a fund: info, TER, minimum investment, launch date.

Parameters (JSON Schema)
scheme_code (required): AMFI scheme code
Behavior: 3/5

No annotations are provided, so the description carries the burden. It implies read-only behavior but does not explicitly state side effects, authentication, or data freshness. Not harmful, but could be more explicit.

Conciseness: 5/5

Single sentence that is front-loaded and contains only essential information. No fluff.

Completeness: 4/5

Given no output schema, the description provides a reasonable list of output fields. However, 'info' is vague and the scope could be clarified; overall it is adequate for a simple retrieval tool.

Parameters: 3/5

Schema coverage is 100% with a single parameter that has its own description. The description adds no extra parameter details, but baseline is 3 due to high coverage.

Purpose: 5/5

The description clearly states the tool retrieves full fund details and enumerates key fields (TER, minimum investment, launch date). It distinguishes this tool from siblings like get_holdings or get_latest_nav.

Usage Guidelines: 3/5

No explicit guidance on when to use this tool versus alternatives, but the purpose is straightforward enough for an agent to infer. Lacks mention of exclusions or preconditions.

get_gift_metrics: A

Get USD-denominated performance metrics for a GIFT City IFSC fund. Returns trailing returns (1W/1M/3M/6M/1Y/3Y/5Y/10Y/YTD), Sharpe, Sortino, and max drawdown.

Parameters (JSON Schema)
scheme_code (required): GIFT City fund code (e.g. GIFT-PP-NASDAQ, GIFT-PP-SP500, GIFT-EDEL-CHINA, GIFT-DSP-GLOBAL)
Behavior: 3/5

No annotations provided, so description carries full burden. It discloses return metrics but does not mention read-only nature, potential errors for invalid codes, or any side effects. Acceptable for a simple query tool but lacks explicit behavioral transparency.

Conciseness: 5/5

Two sentences with no wasted words. Front-loaded with verb and specific outputs. Efficient and clear.

Completeness: 4/5

Given the simplicity (one param, no output schema), the description covers the purpose and the list of returned metrics. It could mention the response format or potentially missing metrics, but it is sufficient for an AI agent to use correctly.

Parameters: 3/5

Schema coverage is 100% for the single parameter scheme_code, which already provides examples. The description adds USD-denomination context but does not enhance parameter meaning beyond the schema. Baseline 3 is appropriate.

Purpose: 5/5

Clearly identifies the tool as retrieving USD-denominated performance metrics for GIFT City IFSC funds, listing specific metrics like trailing returns, Sharpe, Sortino, and max drawdown. Distinguishes from sibling tools like get_metrics (non-GIFT) and get_gift_nav_history (NAV history vs performance).

Usage Guidelines: 4/5

Context is clear that this tool is for GIFT City funds, implied by the name and description. However, no explicit when-not-to-use guidance or alternatives are given; the agent must infer from sibling names. Still, adequate for a focused tool.

get_gift_nav_history: C

Get USD NAV history for a GIFT City IFSC fund. Supports optional date range filtering.

Parameters (JSON Schema)
to (optional): End date in YYYY-MM-DD format
from (optional): Start date in YYYY-MM-DD format
scheme_code (required): GIFT City fund code (e.g. GIFT-PP-NASDAQ)
Behavior: 2/5

No annotations are provided, so the description carries the full burden; it lacks details on data freshness, authentication, or behavior when no records match.

Conciseness: 4/5

Two short sentences, front-loaded with the main action; they could carry more useful detail without becoming verbose.

Completeness: 2/5

There is no output schema, and the description does not explain the return format or structure of the NAV history; the parameters are covered, but output context is missing.

Parameters: 3/5

Schema coverage is 100%, so the baseline of 3 applies; the description adds 'optional date range filtering' but does not enhance parameter meaning beyond the schema.

Purpose: 4/5

The description clearly states 'Get USD NAV history for a GIFT City IFSC fund' with specific verb and resource, but it does not differentiate from siblings like get_nav_history or get_gift_metrics.

Usage Guidelines: 2/5

The description only mentions optional date range filtering; there is no guidance on when to use it versus alternatives like get_nav_history or get_latest_nav.

get_holdings: B

Get top holdings for a fund with percentage of NAV.

Parameters (JSON Schema)
limit (optional): Number of top holdings to return (default: 10, max: 50)
scheme_code (required): AMFI scheme code
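
As a sketch of defensive client-side handling (the scheme code shown is a placeholder, not a real AMFI code), the documented default of 10 and cap of 50 can be applied before the call:

```python
DEFAULT_LIMIT = 10  # server-side default per the schema
MAX_LIMIT = 50      # documented maximum

def holdings_arguments(scheme_code, limit=None):
    """Build get_holdings arguments, clamping limit into 1..50.

    When limit is None the key is omitted and the server's
    default of 10 applies.
    """
    args = {"scheme_code": scheme_code}
    if limit is not None:
        args["limit"] = max(1, min(limit, MAX_LIMIT))
    return args

print(holdings_arguments("118834"))        # placeholder AMFI code
print(holdings_arguments("118834", 200))   # clamped to 50
```

Clamping locally avoids sending a value the schema would reject, at the cost of silently capping oversized requests.
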
Behavior: 2/5

With no annotations, the description must fully disclose behavior. It states the function but omits details like read-only nature, auth requirements, rate limits, or side effects. The description is too sparse to inform the agent about operational characteristics.

Conciseness: 4/5

The description is a single sentence with no unnecessary words. It conveys the core purpose efficiently, though it could be slightly expanded without losing conciseness.

Completeness: 3/5

Given no output schema, the description partially explains return content (holdings with % NAV). However, it lacks details on output format, pagination, or sorting. For a simple tool, this is minimally adequate.

Parameters: 3/5

Schema coverage is 100% (both parameters have descriptions). The description adds context ('top holdings', 'percentage of NAV') that reinforces parameters but does not significantly extend beyond what the schema already provides.

Purpose: 5/5

The description clearly states the tool retrieves top holdings of a fund with percentage of NAV, specifying the resource (top holdings) and the action (get). This distinguishes it from siblings like get_fund (basic fund info) and find_funds_holding (likely searches across funds).

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like get_fund or find_funds_holding. It does not mention prerequisites, context, or exclusions, relying solely on the inferred purpose.

get_latest_nav: A

Get the latest NAV for a fund along with the 1-day change percentage.

Parameters (JSON Schema)
scheme_code (required): AMFI scheme code
Behavior: 3/5

No annotations provided, so description carries full burden. It discloses output (NAV and change) but does not mention behavioral aspects like error handling on invalid scheme codes, rate limits, or whether it is read-only. Basic transparency but not comprehensive.

Conciseness: 5/5

Single sentence, front-loaded, with no redundant words. Efficiently communicates essential purpose.

Completeness: 4/5

Given no output schema and low complexity, the description sufficiently informs the agent of the output fields (NAV and change). It lacks format details like date or currency but is complete enough for typical use. Minor gaps, but overall adequate.

Parameters: 3/5

Schema coverage is 100% with parameter 'scheme_code' described as 'AMFI scheme code'. Description adds no further meaning; baseline of 3 applies as schema already documents the parameter adequately.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'get', resource 'latest NAV', and additional output '1-day change percentage'. Differentiates from siblings like get_nav_history (historical data) and get_fund (comprehensive info).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage for retrieving current single-point NAV data but lacks explicit guidance on when to use this tool versus alternatives such as get_nav_history or get_fund. No when-not-to-use instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_metrics (Grade: A)

Get risk-adjusted metrics for a fund: Sharpe, Sortino, max drawdown, alpha, beta, information ratio, category rank percentile, and TER.

Parameters (JSON Schema)
Name | Required | Description | Default
scheme_code | Yes | AMFI scheme code | -
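To make two of the listed metrics concrete, here is a toy computation of an annualized Sharpe ratio and maximum drawdown from a short NAV series; the NAV values, the zero risk-free rate, and the 252-day annualization factor are illustrative assumptions, not the server's actual methodology.

```python
import math

# Toy daily NAV series (illustrative values, not real fund data).
navs = [100.0, 101.0, 100.5, 102.0, 101.0, 103.0]

# Daily simple returns.
rets = [navs[i] / navs[i - 1] - 1 for i in range(1, len(navs))]

mean = sum(rets) / len(rets)
var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
std = math.sqrt(var)

# Annualized Sharpe with the risk-free rate assumed 0 (252 trading days).
sharpe = (mean / std) * math.sqrt(252)

# Max drawdown: largest peak-to-trough fall in the NAV series.
peak, max_dd = navs[0], 0.0
for nav in navs:
    peak = max(peak, nav)
    max_dd = max(max_dd, (peak - nav) / peak)

print(round(sharpe, 2), round(max_dd * 100, 2))
```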
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states the tool 'gets' metrics, implying a read operation, but does not disclose any behavioral traits like authentication requirements, rate limits, or data source freshness. It is adequate but not transparent beyond function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently communicates the purpose and lists the returned metrics. No extraneous content, and the most important information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lists the output metrics, which is helpful given the absence of an output schema. However, it does not mention error conditions, data availability (e.g., historical period), or prerequisites. For a simple tool with one parameter, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes the single parameter (scheme_code) as 'AMFI scheme code' with 100% coverage. The description adds no further meaning about parameter constraints, format, or how to obtain it. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves risk-adjusted metrics for a fund and provides an exhaustive list of specific metrics (Sharpe, Sortino, max drawdown, etc.). This distinguishes it from sibling tools like get_fund (basic info) or get_trailing_returns (raw returns).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description implies it is for obtaining a specific set of performance statistics, but lacks direction on context (e.g., 'for a full risk profile') or exclusions (e.g., 'not for raw returns').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_nav_history (Grade: B)

Get NAV history for a fund, optionally filtered by date range.

Parameters (JSON Schema)
Name | Required | Description | Default
to_date | No | End date in YYYY-MM-DD format (optional) | -
from_date | No | Start date in YYYY-MM-DD format (optional) | -
scheme_code | Yes | AMFI scheme code | -
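A client-side sketch of the optional date-range filtering, assuming the tool returns (date, NAV) rows; the history values are made up.

```python
from datetime import date

# Toy NAV history rows (illustrative values only).
history = [
    (date(2024, 1, 1), 100.0),
    (date(2024, 1, 2), 100.4),
    (date(2024, 1, 3), 100.1),
    (date(2024, 1, 4), 100.9),
]

def filter_range(rows, from_date=None, to_date=None):
    """Mimic the optional from_date/to_date filtering."""
    return [
        (d, nav) for d, nav in rows
        if (from_date is None or d >= from_date)
        and (to_date is None or d <= to_date)
    ]

window = filter_range(history, from_date=date(2024, 1, 2), to_date=date(2024, 1, 3))
print(window)
```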
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose operation type (read-only) but does not. It lacks details on rate limits, pagination, or data scope beyond date filtering.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. Efficiently conveys the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema provided, and the description fails to specify return format (e.g., list of objects with date and NAV). Incomplete for a history tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are already documented. The description adds no extra meaning, meeting the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves NAV history for a fund with optional date range filtering. It effectively distinguishes from siblings like 'get_latest_nav' by specifying 'history'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as 'get_latest_nav'. The description does not mention prerequisites or context for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_sif_metrics (Grade: C)

Get performance metrics for a SIF strategy.

Parameters (JSON Schema)
Name | Required | Description | Default
scheme_code | Yes | SIF scheme code in format SIF-XX (e.g. SIF-01) | -
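A hypothetical client-side check for the SIF-XX code format the schema describes; the two-digit suffix is an assumption based on the example SIF-01.

```python
import re

# Assumed pattern for SIF scheme codes: "SIF-" plus two digits.
SIF_CODE = re.compile(r"SIF-\d{2}")

def is_valid_sif_code(code):
    """Return True if the code matches the assumed SIF-XX format."""
    return SIF_CODE.fullmatch(code) is not None

print(is_valid_sif_code("SIF-01"), is_valid_sif_code("120503"))
```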
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description merely states it gets metrics without disclosing any behavioral traits like read-only nature, return format, or side effects. Insufficient for an unannotated tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One short sentence with no redundant words. Efficient but not front-loaded with key action; adequate for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks information about output (no output schema) and does not elaborate on what metrics are returned. For a tool with minimal complexity, this is a notable gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of the single parameter with a clear description, so baseline is 3. The description adds no extra meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Get performance metrics' for 'SIF strategy', making the verb and resource explicit. Distinguishes from siblings like 'get_metrics' by specifying SIF, but could be more precise about what 'performance metrics' includes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives such as 'get_metrics', 'get_latest_nav', or 'get_trailing_returns'. Missing context on prerequisites or when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_trailing_returns (Grade: A)

Get trailing returns for 1W, 1M, 3M, 6M, 1Y, 3Y, 5Y periods as percentages.

Parameters (JSON Schema)
Name | Required | Description | Default
scheme_code | Yes | AMFI scheme code | -
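A sketch of how trailing returns are conventionally derived from two NAVs: simple percentage change for periods up to one year, annualized (CAGR) beyond that. Whether the server annualizes exactly this way is an assumption; the NAV figures are made up.

```python
def trailing_return(nav_start, nav_end, years):
    """Percentage return over `years`; annualized (CAGR) beyond 1 year."""
    if years <= 1:
        return (nav_end / nav_start - 1) * 100
    return ((nav_end / nav_start) ** (1 / years) - 1) * 100

one_year = trailing_return(100.0, 112.0, 1)     # simple 1Y return
three_year = trailing_return(100.0, 140.49, 3)  # annualized 3Y return
print(round(one_year, 2), round(three_year, 2))
```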
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full responsibility for behavioral disclosure. It correctly indicates a read operation returning percentage data but does not mention side effects, permissions, or rate limits. Basic transparency is achieved.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, efficient sentence that concisely conveys the tool's purpose, periods, and output unit. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the functional scope (periods, percentages) and leverages schema for the parameter. However, it omits the structure of the return value (e.g., a dict or list), which would be helpful given the lack of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (scheme_code described as 'AMFI scheme code'). The description adds no additional context or examples beyond the schema, so it scores baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns trailing returns for specific periods (1W, 1M, 3M, 6M, 1Y, 3Y, 5Y) as percentages, with a specific verb (Get) and resource (trailing returns). It distinguishes from siblings by focusing solely on trailing returns, unlike broader tools like get_metrics or get_fund.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For instance, it does not contrast with get_nav_history or get_metrics, nor does it specify prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_amcs (Grade: A)

List all AMCs (Asset Management Companies) with fund count, sorted alphabetically.

Parameters (JSON Schema)
No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description transparently states the behavior: listing all, including fund count, and sorting alphabetically. It does not mention performance or auth, but for a simple read operation this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is front-loaded and contains no redundant words. Every part adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with zero parameters and no output schema, the description provides enough information to understand what the tool does and what to expect (list of AMCs with fund count, sorted). It could be more explicit about the output structure, but given low complexity, it is sufficiently complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters, so schema coverage is 100% by default. The description adds no parameter information because none are needed. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'list', the resource 'AMCs', and adds details: 'with fund count, sorted alphabetically'. It is distinct from sibling tools which focus on funds, not AMCs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when to use or alternatives are provided, but the purpose is straightforward due to the simple nature of the tool. Sibling tools are all fund-related, so the context implies use for AMC listing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories (Grade: A)

List all SEBI mutual fund categories with fund count.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It describes a read-only list operation with no side effects, but lacks detail on performance, pagination, or potential limitations. Adequate for a simple list tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, straightforward sentence that clearly conveys the tool's function with no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but the description explicitly states the return includes categories with fund count, which is sufficient for this simple list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100%. The description need not add parameter meaning; baseline 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all SEBI mutual fund categories and includes fund count. It specifies the resource (categories) and scope (SEBI mutual fund), distinguishing it from sibling tools like get_category_stats and list_funds_in_category.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for listing categories but provides no explicit guidance on when to use this tool versus alternatives, such as get_category_stats or list_funds_in_category.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_funds_in_category (Grade: A)

List all funds in a SEBI category, ranked by a chosen metric.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max results (default: 20, max: 100) | -
sort_by | No | Metric to sort by (default: return_1y) | return_1y
category | Yes | Exact fund_type / SEBI category value | -
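A sketch of the ranked listing the description implies, with `sort_by` choosing the metric; the fund rows and metric names are illustrative, not real data.

```python
# Toy fund rows keyed by the metrics an agent might sort on.
funds = [
    {"name": "Fund A", "return_1y": 18.0, "sharpe": 1.2},
    {"name": "Fund B", "return_1y": 22.0, "sharpe": 0.9},
]

def rank(rows, sort_by="return_1y", limit=20):
    """Rank funds by the chosen metric, highest first."""
    return sorted(rows, key=lambda f: f[sort_by], reverse=True)[:limit]

print([f["name"] for f in rank(funds, sort_by="sharpe")])
```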
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It does not mention that the tool is read-only, nor any details about pagination limits, data freshness, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence of 12 words that efficiently conveys the core purpose without unnecessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of a complete schema (100% coverage) and no output schema, the description covers the essential functionality. It lacks mention of the limit parameter but is adequate for a simple listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds value by explaining that results are ranked by a chosen metric (sort_by) and that category refers to a SEBI category, enhancing the semantic meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List', the resource 'funds in a SEBI category', and the action 'ranked by a chosen metric', making the purpose specific and distinguishable from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'compare_funds' or 'screen_funds', nor does it specify prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_gift_funds (Grade: A)

List all GIFT City IFSC retail funds (USD-denominated offshore mutual funds available to NRIs and global investors). Returns fund metadata, TER, benchmark, and latest NAV.

Parameters (JSON Schema)
No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite no annotations, the description discloses the scope and return data. It does not mention pagination or data freshness, but for a simple list with no parameters, the level of detail is adequate. It fully describes the output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no filler, clearly structured: first sentence describes action and scope, second sentence lists key return fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no output schema, the description completely covers the tool's purpose and output. It is self-contained and sufficient for an agent to decide when to call it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema is empty with 100% coverage, so the description adds meaning by specifying the fund type and return fields, which goes beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists all GIFT City IFSC retail funds, specifies their nature (USD-denominated, for NRIs/global investors), and lists return fields (metadata, TER, benchmark, latest NAV). It distinguishes from siblings like list_funds_in_category or search_funds.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates use for broad listing of specific funds, but lacks explicit guidance on when to use this tool versus alternatives like search_funds or list_funds_in_category. It provides clear context but no exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_sif_strategies (Grade: A)

List all 57 SIF (Specialised Investment Fund) strategies with AMC, category, and latest NAV.

Parameters (JSON Schema)
No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool returns all 57 strategies with specified fields, implying a read-only, fixed output. No annotations are provided, so the description carries the burden and does so adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, focused sentence with no unnecessary words, efficiently conveying the tool's action and output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description sufficiently explains the return content (AMC, category, latest NAV) and the scope (all 57 strategies), making it complete for the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, but the description adds meaningful return field details beyond the empty schema, enhancing usability.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all 57 SIF strategies with specific fields (AMC, category, latest NAV), distinguishing it from siblings like list_amcs and list_categories.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use or alternatives, but the purpose is clear enough for an agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

screen_funds (Grade: B)

Screen and filter funds using quantitative criteria. Returns a ranked list.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max results (default: 20, max: 50) | -
max_ter | No | Maximum TER in % (optional) | -
category | No | Filter by SEBI category (optional) | -
plan_type | No | Plan type filter (default: direct) | direct
min_sharpe | No | Minimum Sharpe ratio (optional) | -
min_return_1y | No | Minimum 1-year return in % (optional) | -
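A sketch of how the optional filters might combine, using made-up fund rows; the ranking by 1-year return mirrors the tool's default sort but is an assumption about its actual ordering.

```python
# Toy fund rows with the fields the screen filters on (illustrative data).
funds = [
    {"name": "Fund A", "category": "Large Cap", "ter": 0.5, "sharpe": 1.2, "return_1y": 18.0},
    {"name": "Fund B", "category": "Large Cap", "ter": 1.1, "sharpe": 0.8, "return_1y": 22.0},
    {"name": "Fund C", "category": "Mid Cap", "ter": 0.7, "sharpe": 1.5, "return_1y": 25.0},
]

def screen(rows, category=None, max_ter=None, min_sharpe=None,
           min_return_1y=None, limit=20):
    """Apply the optional filters, then rank by 1Y return, descending."""
    out = [
        f for f in rows
        if (category is None or f["category"] == category)
        and (max_ter is None or f["ter"] <= max_ter)
        and (min_sharpe is None or f["sharpe"] >= min_sharpe)
        and (min_return_1y is None or f["return_1y"] >= min_return_1y)
    ]
    out.sort(key=lambda f: f["return_1y"], reverse=True)
    return out[:limit]

picks = screen(funds, max_ter=0.8, min_sharpe=1.0)
print([f["name"] for f in picks])
```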
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only mentions that it returns a ranked list, omitting critical details like whether it is read-only, required authorizations, or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-load the purpose and output, with no redundant or extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 optional parameters and no output schema or annotations, the description is incomplete. It lacks details on ranking criteria, pagination, or output format, which are essential for a screening tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds no additional semantics beyond the schema; it merely says 'quantitative criteria' without elaborating on parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Screen and filter funds') and resource ('funds'), and distinguishes from siblings like 'search_funds' (keyword-based) and 'compare_funds' (comparison) by emphasizing quantitative criteria.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for quantitative screening but does not explicitly state when to use it versus alternatives like 'search_funds' or 'compare_funds', nor does it provide when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_funds (Grade: C)

Search mutual funds by name keyword. Optionally filter by SEBI category or AMC.

Parameters (JSON Schema)
Name | Required | Description | Default
q | Yes | Search query — fund name or partial name | -
amc | No | Filter by AMC name (optional) | -
limit | No | Max results to return (default: 3, max: 100) | -
category | No | Filter by SEBI fund category (optional) | -
include_sif | No | Include SIF strategies in results (default: false) | -
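A sketch of the keyword matching the `q` parameter suggests, assuming case-insensitive substring search (the server's actual matching may differ); the fund names are illustrative.

```python
# Illustrative fund names, not real listings.
funds = ["HDFC Flexi Cap Fund", "SBI Bluechip Fund", "HDFC Mid-Cap Opportunities"]

def search(q, names, limit=3):
    """Case-insensitive substring match over fund names."""
    q = q.lower()
    return [n for n in names if q in n.lower()][:limit]

print(search("hdfc", funds))
```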
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must disclose behavioral traits. It only states basic search and filter capabilities, omitting details like pagination, rate limits, or behavior on empty results. The lack of transparency is a significant gap for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two clear sentences, front-loading the main action. It avoids unnecessary words but could benefit from a slightly more structured breakdown of optional filters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters and no output schema or annotations, the description is incomplete. It omits important parameters like limit (pagination) and include_sif (special feature), leaving critical usage details unaddressed. A more complete description would mention all key parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds minimal semantic value by naming the filter parameters (SEBI category, AMC) but does not explain their exact usage, defaults, or constraints beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Search mutual funds by name keyword' with optional filters, specifying the verb (search) and resource (mutual funds). However, it does not explicitly differentiate from sibling tools like screen_funds or list_funds_in_category, which may have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., screen_funds for advanced screening or list_funds_in_category for browsing categories). No exclusions or context for appropriate use are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
