Sukoon Mutual Fund MCP Server
Server Details
Connect Sukoon MCP to Claude, Cursor, or any MCP client. Research 14,000+ Indian mutual funds with 20+ years of daily NAV history, benchmarks, and rich performance metrics like CAGR, alpha, beta, Sharpe ratio, drawdown, volatility, and rolling returns — all in plain English. Compare funds, analyze risk, backtest strategies, and uncover insights for free, forever. https://sukoon.money
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
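As a rough, hypothetical sketch (the endpoint URL is not published on this listing, and configuration file formats differ between MCP clients), a remote Streamable HTTP server is typically registered with an entry along these lines; replace the placeholder URL with the server's actual endpoint and consult your client's documentation for the exact file and field names.

```json
{
  "mcpServers": {
    "sukoon-mutual-funds": {
      "url": "https://REPLACE-WITH-SERVER-URL/mcp"
    }
  }
}
```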
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 17 of 17 tools scored. Lowest: 2.9/5.
Each tool has a clearly distinct purpose, from comparing funds and screening to retrieving specific metrics or listings. No two tools overlap in functionality, ensuring agents can easily select the right one.
All tool names follow a consistent verb_noun pattern in snake_case (e.g., compare_funds, get_metrics, list_categories). The naming is predictable and uniform across the entire set.
Seventeen tools is well-scoped for a mutual fund data server, covering search, screening, comparison, and retrieval of various metrics without being excessive or insufficient.
The tool set covers most common mutual fund operations: search, listing, comparison, NAV, returns, risk metrics, holdings, and benchmarks. Minor gaps like sector allocation or fund documents exist, but core workflows are well-supported.
Available Tools
21 tools

compare_funds (Grade: B)
Compare 2–10 funds side-by-side on all key metrics.
| Name | Required | Description | Default |
|---|---|---|---|
| scheme_codes | Yes | Array of AMFI scheme codes to compare (2–10) | |
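For illustration only, a compare_funds invocation expressed as an MCP tools/call request could look like the sketch below; the scheme codes are placeholders, not verified AMFI codes.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "compare_funds",
    "arguments": {
      "scheme_codes": ["<scheme_code_1>", "<scheme_code_2>", "<scheme_code_3>"]
    }
  }
}
```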
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must disclose behavior. It mentions 'all key metrics' but does not specify which metrics, output format, or edge cases like invalid fund codes. Lacks behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One concise sentence with no filler, front-loaded with the core action and scope. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimal description with 1 param and no output schema. It conveys the purpose but omits details on output, metrics, and caveats. Given the complexity, more context would aid selection among many siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter, and the description's '2-10 funds' echoes the schema's minItems/maxItems. The description adds no new semantic meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Compare'), the resource ('funds'), and the scope ('2-10 funds side-by-side on all key metrics'). It distinguishes from siblings like get_fund (singular) or get_metrics (one fund).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like screen_funds or search_funds. Does not mention prerequisites or scenarios where comparison is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_funds_holding (Grade: A)
Find all funds that hold a specific stock (by name partial match or ISIN).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max funds to return (default: 20) | |
| identifier | Yes | Stock name (partial match) or ISIN | |
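A hypothetical find_funds_holding call, shown as an MCP tools/call request; "Infosys" is only an example of a partial stock-name match.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "find_funds_holding",
    "arguments": {
      "identifier": "Infosys",
      "limit": 10
    }
  }
}
```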
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full burden. It discloses the search capability but omits behaviors like pagination, error handling, or response format. Minimal details beyond the core function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no wasted words. Information is efficiently packed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-param tool, the description covers purpose and identifier method. However, missing details on output format or what the returned list contains (e.g., fund names, IDs). Adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented. The description adds no new meaning beyond the schema, merely restating 'name partial match or ISIN' already present. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Find' and the resource 'all funds that hold a specific stock'. It specifies matching by name partial match or ISIN, distinguishing it from siblings like 'get_holdings' (which shows holdings of a fund) and 'search_funds' (which searches funds by criteria).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for finding funds holding a stock, but lacks explicit guidance on when not to use or comparisons to alternatives like 'get_holdings' or 'search_funds'. No exclusions or caveats are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_benchmark (Grade: A)
Get TRI (Total Return Index) series for a benchmark index. Available: NIFTY 50, NIFTY 100, NIFTY 500, NIFTY MIDCAP 150, NIFTY SMALLCAP 250, and others.
| Name | Required | Description | Default |
|---|---|---|---|
| to_date | No | End date in YYYY-MM-DD format (optional) | |
| from_date | No | Start date in YYYY-MM-DD format (optional) | |
| index_name | Yes | Benchmark index name (e.g. "NIFTY 50", "NIFTY MIDCAP 150") | |
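An illustrative get_benchmark request using one of the listed indices and the YYYY-MM-DD date format from the schema; the date range itself is arbitrary.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_benchmark",
    "arguments": {
      "index_name": "NIFTY 50",
      "from_date": "2020-01-01",
      "to_date": "2024-12-31"
    }
  }
}
```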
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It uses 'Get' implying a read operation but does not confirm idempotency, authentication needs, rate limits, or return format requirements. This leaves critical behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences, front-loaded with the core purpose. Every sentence serves a purpose—first explains the action and data, second provides concrete examples. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is adequate for a simple data retrieval tool but lacks details about the output format (e.g., what fields are returned, pagination). Given the absence of an output schema, it should at least hint at the response structure to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (all parameters have descriptions). The description adds value by listing example index names and indicating optionality of dates, but it does not provide additional semantics beyond the schema (e.g., no details on date range constraints or default behavior).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves TRI series for a benchmark index, with specific examples like NIFTY 50 and NIFTY 100. The verb 'Get' combined with the resource 'TRI series' makes the purpose unambiguous and distinct from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool over alternatives is provided. The description implicitly suggests usage by listing available benchmarks but does not mention exclusions, prerequisites, or comparison with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_category_stats (Grade: A)
Get median and average for all metrics across all funds in a SEBI category.
| Name | Required | Description | Default |
|---|---|---|---|
| category | Yes | SEBI fund category name | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates a read operation (getting stats) but omits details about permissions, rate limits, response format, or whether it includes all metrics or only median/average. While not contradictory, it provides only minimal behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence of 16 words with no redundancy. Every word contributes purpose: 'Get', 'median and average', 'all metrics', 'across all funds in a SEBI category'. It earns its place efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not specify the return structure (e.g., list of metric names with values). It mentions 'all metrics' but does not define which metrics. For a simple one-parameter tool, this is adequate but lacks completeness about what the agent can expect in response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage for the 'category' parameter. The description adds extra meaning by specifying 'across all funds in a SEBI category,' reinforcing the parameter's role. This goes beyond the schema's terse 'SEBI fund category name' and helps the agent understand the scope.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the specific resource: 'median and average for all metrics across all funds in a SEBI category.' This distinguishes it from siblings like list_categories (lists categories) and list_funds_in_category (lists funds), as it focuses on aggregate statistics per category.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for aggregate category statistics but lacks explicit when-to-use or when-not-to-use guidance. It does not mention alternatives like get_metrics for individual funds or compare_funds for cross-fund comparison, leaving the agent to infer context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_debt_quants (Grade: A)
Get debt quantitative metrics for a fund: Macaulay Duration, Modified Duration, average maturity (years), and annualised YTM (%). Available for debt and hybrid funds from AMFI portfolio disclosures.
| Name | Required | Description | Default |
|---|---|---|---|
| scheme_code | Yes | AMFI scheme code | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the data source (AMFI portfolio disclosures) and lists returned metrics, but does not disclose behavioral traits such as data freshness, authentication requirements, or rate limits. The description is adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence (around 20 words) that front-loads the action and outputs. Every word adds value, with no extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one required parameter, no output schema, and no annotations, the description is largely complete: it specifies the metrics and applicability. However, the lack of output format details or any behavioral aspects slightly reduces completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the single parameter 'scheme_code' is described as 'AMFI scheme code' in the schema. The description does not add additional meaning beyond what the schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with a specific verb 'Get' and resource 'debt quantitative metrics for a fund', listing exact metrics (Macaulay Duration, Modified Duration, average maturity, YTM). It distinguishes from siblings like get_metrics (likely equity) and get_sif_metrics by stating it is for debt and hybrid funds from AMFI disclosures.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context: 'Available for debt and hybrid funds from AMFI portfolio disclosures.' This implies not for other fund types, though it does not explicitly name alternatives or when-not-to-use. The sibling tools like get_metrics and get_sif_metrics serve as implicit alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_fund (Grade: A)
Get full details for a fund: info, TER, minimum investment, launch date.
| Name | Required | Description | Default |
|---|---|---|---|
| scheme_code | Yes | AMFI scheme code | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It implies read-only behavior but does not explicitly state side effects, authentication, or data freshness. Not harmful, but could be more explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is front-loaded and contains only essential information. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description provides a reasonable list of output fields. However, 'info' is vague and clarifying its scope would help; overall it is adequate for a simple retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter that has its own description. The description adds no extra parameter details, but baseline is 3 due to high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves full fund details and enumerates key fields (TER, minimum investment, launch date). It distinguishes this tool from siblings like get_holdings or get_latest_nav.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives, but the purpose is straightforward enough for an agent to infer. Lacks mention of exclusions or preconditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_gift_metrics (Grade: A)
Get USD-denominated performance metrics for a GIFT City IFSC fund. Returns trailing returns (1W/1M/3M/6M/1Y/3Y/5Y/10Y/YTD), Sharpe, Sortino, and max drawdown.
| Name | Required | Description | Default |
|---|---|---|---|
| scheme_code | Yes | GIFT City fund code (e.g. GIFT-PP-NASDAQ, GIFT-PP-SP500, GIFT-EDEL-CHINA, GIFT-DSP-GLOBAL) | |
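A sketch of a get_gift_metrics call using one of the example codes from the schema (GIFT-PP-NASDAQ); whether that code resolves to live data is not verified here.

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_gift_metrics",
    "arguments": {
      "scheme_code": "GIFT-PP-NASDAQ"
    }
  }
}
```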
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return metrics but does not mention read-only nature, potential errors for invalid codes, or any side effects. Acceptable for a simple query tool but lacks explicit behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. Front-loaded with verb and specific outputs. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (one param, no output schema), description covers purpose and return metrics list. Could mention response format or potential missing metrics, but overall sufficient for an AI agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter scheme_code, which already provides examples. Description adds context of USD-denominated but does not enhance parameter meaning beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly identifies tool as retrieving USD-denominated performance metrics for GIFT City IFSC funds, listing specific metrics like trailing returns, Sharpe, Sortino, and max drawdown. Distinguishes from sibling tools like get_metrics (non-GIFT) and get_gift_nav_history (NAV history vs performance).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Context is clear that this tool is for GIFT City funds, implied by name and description. However, no explicit when-not-to-use or alternatives given; agent must infer from sibling names. Still, adequate for a focused tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_holdings (Grade: B)
Get top holdings for a fund with percentage of NAV.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of top holdings to return (default: 10, max: 50) | |
| scheme_code | Yes | AMFI scheme code | |
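An illustrative get_holdings request that overrides the default limit of 10; the scheme code is a placeholder, not a real AMFI code.

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_holdings",
    "arguments": {
      "scheme_code": "<AMFI scheme code>",
      "limit": 25
    }
  }
}
```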
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully disclose behavior. It states the function but omits details like read-only nature, auth requirements, rate limits, or side effects. The description is too sparse to inform the agent about operational characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no unnecessary words. It conveys the core purpose efficiently, though it could be slightly expanded without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description partially explains return content (holdings with % NAV). However, it lacks details on output format, pagination, or sorting. For a simple tool, this is minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both parameters have descriptions). The description adds context ('top holdings', 'percentage of NAV') that reinforces parameters but does not significantly extend beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves top holdings of a fund with percentage of NAV, specifying the resource (top holdings) and the action (get). This distinguishes it from siblings like get_fund (basic fund info) and find_funds_holding (likely searches across funds).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_fund or find_funds_holding. It does not mention prerequisites, context, or exclusions, relying solely on the inferred purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_metrics (Grade: A)
Get risk-adjusted metrics for a fund: Sharpe, Sortino, max drawdown, alpha, beta, information ratio, category rank percentile, and TER.
| Name | Required | Description | Default |
|---|---|---|---|
| scheme_code | Yes | AMFI scheme code | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool 'gets' metrics, implying a read operation, but does not disclose any behavioral traits like authentication requirements, rate limits, or data source freshness. It is adequate but not transparent beyond function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently communicates the purpose and lists the returned metrics. No extraneous content, and the most important information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lists the output metrics, which is helpful given the absence of an output schema. However, it does not mention error conditions, data availability (e.g., historical period), or prerequisites. For a simple tool with one parameter, this is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes the single parameter (scheme_code) as 'AMFI scheme code' with 100% coverage. The description adds no further meaning about parameter constraints, format, or how to obtain it. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves risk-adjusted metrics for a fund and provides an exhaustive list of specific metrics (Sharpe, Sortino, max drawdown, etc.). This distinguishes it from sibling tools like get_fund (basic info) or get_trailing_returns (raw returns).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description implies it is for obtaining a specific set of performance statistics, but lacks direction on context (e.g., 'for a full risk profile') or exclusions (e.g., 'not for raw returns').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sif_metrics (Grade: C)
Get performance metrics for a SIF strategy.
| Name | Required | Description | Default |
|---|---|---|---|
| scheme_code | Yes | SIF scheme code in format SIF-XX (e.g. SIF-01) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description merely states it gets metrics without disclosing any behavioral traits like read-only nature, return format, or side effects. Insufficient for an unannotated tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One short sentence with no redundant words. Efficient, though not front-loaded with the key action; adequate for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks information about output (no output schema) and does not elaborate on what metrics are returned. For a tool with minimal complexity, this is a notable gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of the single parameter with a clear description, so baseline is 3. The description adds no extra meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Get performance metrics' for 'SIF strategy', making the verb and resource explicit. Distinguishes from siblings like 'get_metrics' by specifying SIF, but could be more precise about what 'performance metrics' includes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives such as 'get_metrics', 'get_latest_nav', or 'get_trailing_returns'. Missing context on prerequisites or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trailing_returns (Grade: A)
Get trailing returns for 1W, 1M, 3M, 6M, 1Y, 3Y, 5Y periods as percentages.
| Name | Required | Description | Default |
|---|---|---|---|
| scheme_code | Yes | AMFI scheme code | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full responsibility for behavioral disclosure. It correctly indicates a read operation returning percentage data but does not mention side effects, permissions, or rate limits. Basic transparency is achieved.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, efficient sentence that concisely conveys the tool's purpose, periods, and output unit. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the functional scope (periods, percentages) and leverages schema for the parameter. However, it omits the structure of the return value (e.g., a dict or list), which would be helpful given the lack of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (scheme_code described as 'AMFI scheme code'). The description adds no additional context or examples beyond the schema, so it scores baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns trailing returns for specific periods (1W, 1M, 3M, 6M, 1Y, 3Y, 5Y) as percentages, with a specific verb (Get) and resource (trailing returns). It distinguishes from siblings by focusing solely on trailing returns, unlike broader tools like get_metrics or get_fund.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For instance, it does not contrast with get_nav_history or get_metrics, nor does it specify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_amcs (Grade: A)
List all AMCs (Asset Management Companies) with fund count, sorted alphabetically.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description transparently states the behavior: listing all, including fund count, and sorting alphabetically. It does not mention performance or auth, but for a simple read operation this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded and contains no redundant words. Every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with zero parameters and no output schema, the description provides enough information to understand what the tool does and what to expect (list of AMCs with fund count, sorted). It could be more explicit about the output structure, but given low complexity, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so schema coverage is 100% by default. The description adds no parameter information because none are needed. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list', the resource 'AMCs', and adds details: 'with fund count, sorted alphabetically'. It is distinct from sibling tools which focus on funds, not AMCs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when to use or alternatives are provided, but the purpose is straightforward due to the simple nature of the tool. Sibling tools are all fund-related, so the context implies use for AMC listing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_categories (Grade: A)
List all SEBI mutual fund categories with fund count.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It describes a read-only list operation with no side effects, but lacks detail on performance, pagination, or potential limitations. Adequate for a simple list tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, straightforward sentence that clearly conveys the tool's function with no extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description explicitly states the return includes categories with fund count, which is sufficient for this simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. The description need not add parameter meaning; baseline 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all SEBI mutual fund categories and includes fund count. It specifies the resource (categories) and scope (SEBI mutual fund), distinguishing it from sibling tools like get_category_stats and list_funds_in_category.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing categories but provides no explicit guidance on when to use this tool versus alternatives, such as get_category_stats or list_funds_in_category.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_funds_in_category (Grade: A)
List all funds in a SEBI category, ranked by a chosen metric.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default: 20, max: 100) | |
| sort_by | No | Metric to sort by (default: return_1y) | return_1y |
| category | Yes | Exact fund_type / SEBI category value | |
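A hypothetical list_funds_in_category call; "Large Cap Fund" is a common SEBI category label, but the exact category string this server expects is an assumption.

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "list_funds_in_category",
    "arguments": {
      "category": "Large Cap Fund",
      "sort_by": "return_1y",
      "limit": 10
    }
  }
}
```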
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits. It does not mention that the tool is read-only, nor any details about pagination limits, data freshness, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence of 12 words that efficiently conveys the core purpose without unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of a complete schema (100% coverage) and no output schema, the description covers the essential functionality. It lacks mention of the limit parameter but is adequate for a simple listing tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value by explaining that results are ranked by a chosen metric (sort_by) and that category refers to a SEBI category, enhancing the semantic meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List', the resource 'funds in a SEBI category', and the action 'ranked by a chosen metric', making the purpose specific and distinguishable from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'compare_funds' or 'screen_funds', nor does it specify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_gift_funds (Grade: A)
List all GIFT City IFSC retail funds (USD-denominated offshore mutual funds available to NRIs and global investors). Returns fund metadata, TER, benchmark, and latest NAV.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses the scope and return data. It does not mention pagination or data freshness, but for a simple list with no parameters, the level of detail is adequate. It fully describes the output.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler, clearly structured: first sentence describes action and scope, second sentence lists key return fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description completely covers the tool's purpose and output. It is self-contained and sufficient for an agent to decide when to call it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema is empty with 100% coverage, so the description adds meaning by specifying the fund type and return fields, which goes beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists all GIFT City IFSC retail funds, specifies their nature (USD-denominated, for NRIs/global investors), and lists return fields (metadata, TER, benchmark, latest NAV). It distinguishes from siblings like list_funds_in_category or search_funds.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates use for broad listing of specific funds, but lacks explicit guidance on when to use this tool versus alternatives like search_funds or list_funds_in_category. It provides clear context but no exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sif_strategies (Grade: A)
List all 57 SIF (Specialised Investment Fund) strategies with AMC, category, and latest NAV.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool returns all 57 strategies with specified fields, implying a read-only, fixed output. No annotations are provided, so the description carries the burden and does so adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, focused sentence with no unnecessary words, efficiently conveying the tool's action and output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description sufficiently explains the return content (AMC, category, latest NAV) and the scope (all 57 strategies), making it complete for the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, but the description adds meaningful return field details beyond the empty schema, enhancing usability.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all 57 SIF strategies with specific fields (AMC, category, latest NAV), distinguishing it from siblings like list_amcs and list_categories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use or alternatives, but the purpose is clear enough for an agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
screen_funds (Grade: B)
Screen and filter funds using quantitative criteria. Returns a ranked list.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default: 20, max: 50) | |
| max_ter | No | Maximum TER in % (optional) | |
| category | No | Filter by SEBI category (optional) | |
| plan_type | No | Plan type filter (default: direct) | direct |
| min_sharpe | No | Minimum Sharpe ratio (optional) | |
| min_return_1y | No | Minimum 1-year return in % (optional) | |
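As a sketch only, a screen_funds call combining several optional filters; the threshold values are arbitrary, and plan_type uses the documented default.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "screen_funds",
    "arguments": {
      "min_sharpe": 1.0,
      "max_ter": 1.0,
      "plan_type": "direct",
      "limit": 20
    }
  }
}
```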
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only mentions that it returns a ranked list, omitting critical details like whether it is read-only, required authorizations, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-load the purpose and output, with no redundant or extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 optional parameters and no output schema or annotations, the description is incomplete. It lacks details on ranking criteria, pagination, or output format, which are essential for a screening tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description adds no additional semantics beyond the schema; it merely says 'quantitative criteria' without elaborating on parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Screen and filter funds') and resource ('funds'), and distinguishes from siblings like 'search_funds' (keyword-based) and 'compare_funds' (comparison) by emphasizing quantitative criteria.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for quantitative screening but does not explicitly state when to use it versus alternatives like 'search_funds' or 'compare_funds', nor does it provide when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_funds (Grade: C)
Search mutual funds by name keyword. Optionally filter by SEBI category or AMC.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | Search query — fund name or partial name | |
| amc | No | Filter by AMC name (optional) | |
| limit | No | Max results to return (default: 3, max: 100) | |
| category | No | Filter by SEBI fund category (optional) | |
| include_sif | No | Include SIF strategies in results (default: false) | |
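A minimal, illustrative search_funds request; "bluechip" is an arbitrary keyword, and the limit overrides the default of 3.

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "search_funds",
    "arguments": {
      "q": "bluechip",
      "limit": 5,
      "include_sif": false
    }
  }
}
```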
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must disclose behavioral traits. It only states basic search and filter capabilities, omitting details like pagination, rate limits, or behavior on empty results. The lack of transparency is a significant gap for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two clear sentences, front-loading the main action. It avoids unnecessary words but could benefit from a slightly more structured breakdown of optional filters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters and no output schema or annotations, the description is incomplete. It omits important parameters like limit (pagination) and include_sif (special feature), leaving critical usage details unaddressed. A more complete description would mention all key parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal semantic value by naming the filter parameters (SEBI category, AMC) but does not explain their exact usage, defaults, or constraints beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Search mutual funds by name keyword' with optional filters, specifying the verb (search) and resource (mutual funds). However, it does not explicitly differentiate from sibling tools like screen_funds or list_funds_in_category, which may have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., screen_funds for advanced screening or list_funds_in_category for browsing categories). No exclusions or context for appropriate use are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!