Indian NSE Stock Insights
Server Details
Analyze any Nifty 500 stock with AI — price action, demand zones, technicals & screener.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: alokbarnwal/nse-public-mcp
- GitHub Stars: 1
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.1/5 across 12 of 12 tools scored. Lowest: 2.4/5.
Each tool targets a distinct aspect of stock analysis: quotes, OHLC data, indicators, patterns (candlestick vs chart), zones, levels, volume, market overview, comparison, and screening. No two tools have overlapping purposes.
Most tools follow a consistent 'get_' prefix pattern (10 out of 12), but 'compare_stocks' and 'screen_stocks' break this pattern with different verbs. Still, the names are clear and predictable.
Twelve tools sits well within the optimal range (3-15) for a focused domain, covering essential technical-analysis functionality without fragmentation or bloat.
The tool set covers a comprehensive range of technical analysis: quotes, OHLC, indicators, patterns, support/resistance, zones, volume, screening, comparison, and market overview. No obvious gaps for the stated purpose.
Available Tools
12 tools
compare_stocks (Grade: A)
Side-by-side comparison of 2-5 symbols: quote, RSI/MACD/ADX, trend, 30d return, latest pattern.
| Name | Required | Description | Default |
|---|---|---|---|
| symbols | Yes | | |
| timeframe | No | | daily |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
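For illustration, a minimal MCP tools/call request for this tool might look like the sketch below. The JSON-RPC envelope follows the MCP spec; the ticker values (RELIANCE, TCS, INFY) and the plain-symbol format are assumptions, since neither the schema nor the description documents the expected symbol syntax.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "compare_stocks",
    "arguments": {
      "symbols": ["RELIANCE", "TCS", "INFY"],
      "timeframe": "daily"
    }
  }
}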
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It lists the output fields (quote, RSI/MACD/ADX, etc.), implying a read-only operation, but does not explicitly confirm non-destructive behavior, rate limits, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single 13-word sentence that front-loads the main action and key details. No redundant or extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has an output schema and 2 simple parameters, the description covers the core functionality and expected output. It could be more explicit about the 2-5 symbol limit and timeframe default, but it is largely complete for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must compensate. It conveys the 'symbols' constraint (2-5 symbols) but mentions 'timeframe' only by implication, and does not fully describe parameter formats or allowed values (e.g., timeframe options).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'compare' and the resource 'stock symbols', and lists specific data points (quote, technical indicators, trend, return, pattern). It distinguishes itself from siblings like get_stock_quote and get_technical_indicators by combining multiple metrics and comparing up to 5 symbols.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for side-by-side comparison but does not explicitly state when to use this tool versus alternatives (e.g., screen_stocks or individual indicator tools). No usage exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_candlestick_patterns (Grade: B)
Recent single-bar candlestick patterns (HAMMER, DOJI, ENGULFING, ...) newest first.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| symbol | Yes | | |
| timeframe | No | | daily |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
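A hedged example request for the most recent single-bar patterns. That 'limit' caps the number of returned patterns is an inference from its name; the schema leaves it undescribed, and the ticker is illustrative.

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_candlestick_patterns",
    "arguments": {
      "symbol": "TCS",
      "limit": 5,
      "timeframe": "daily"
    }
  }
}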
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It discloses the ordering ('newest first') and scope ('single-bar patterns'), but does not mention read-only nature, error conditions, or required permissions. The description adds some context but is incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no wasted words. It directly conveys the core purpose. However, it could be slightly longer to include parameter hints without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters and 0% schema coverage, the description is too minimal. It does not explain the limit or timeframe defaults, nor does it mention the output schema. The agent likely needs additional context to use this tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not elaborate on any parameter (symbol, limit, timeframe). It fails to add meaning beyond the schema's property names, leaving the agent without guidance on parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (get), the resource (single-bar candlestick patterns), and lists examples (HAMMER, DOJI, ENGULFING). It distinguishes from the sibling tool 'get_chart_patterns' by specifying 'single-bar', making the purpose specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_chart_patterns'. It does not mention context, prerequisites, or exclusions, leaving the agent without decision-making support.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_chart_patterns (Grade: C)
Multi-bar chart patterns (DOUBLE_TOP, HEAD_AND_SHOULDERS, ...). 'ACTIVE' = PENDING/CONFIRMED.
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | | ACTIVE |
| symbol | Yes | | |
| timeframe | No | | daily |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
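A sketch of a request filtering to confirmed patterns. The description implies PENDING and CONFIRMED are accepted 'status' values (since 'ACTIVE' = PENDING/CONFIRMED), but this is inferred rather than documented, and the ticker is illustrative.

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_chart_patterns",
    "arguments": {
      "symbol": "INFY",
      "status": "CONFIRMED",
      "timeframe": "daily"
    }
  }
}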
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose key behaviors. It only explains that 'ACTIVE' means PENDING/CONFIRMED. No information about the output, pagination, or what actions the tool performs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short and to the point, with no filler. However, it sacrifices completeness for brevity, which is acceptable but not optimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters, no annotations, and an output schema, the description should provide more context. It fails to explain the purpose of each parameter or the overall functionality beyond a vague summary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must explain parameters. It only clarifies the 'status' parameter value 'ACTIVE', leaving 'symbol' and 'timeframe' undefined. This is insufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves multi-bar chart patterns and lists examples like DOUBLE_TOP, HEAD_AND_SHOULDERS. It distinguishes itself from sibling tools like get_candlestick_patterns by focusing on multi-bar patterns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool over alternatives. No mention of use cases or prerequisites. The description is entirely silent on context of use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_demand_supply_zones (Grade: C)
Demand/supply zones (DBR/RBR/RBD/DBD) split into demand_zones and supply_zones.
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | | ACTIVE |
| symbol | Yes | | |
| timeframe | No | | daily |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
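A minimal request relying on the defaults (status ACTIVE, timeframe daily); only the required 'symbol' is supplied, and the ticker value is an assumption.

{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_demand_supply_zones",
    "arguments": {
      "symbol": "HDFCBANK"
    }
  }
}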
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full responsibility for behavioral disclosure. It only mentions the output structure but does not reveal any behavioral traits such as computational method, data freshness, or whether it is read-only. This is insufficient for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no wasted words. It is concise, though it sacrifices informativeness. For its length, it conveys the core idea efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description omits parameter explanations and usage context. Although an output schema exists, the description does not leverage it to provide a complete picture. The tool's purpose and invocation requirements are inadequately covered given its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must explain all three parameters (symbol, status, timeframe). It fails to do so entirely, providing no meaning beyond the schema's property names. This is a critical gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource as demand/supply zones with specific acronyms (DBR/RBR/RBD/DBD) and states the output is split into demand_zones and supply_zones. This distinguishes it from sibling tools like get_support_resistance. However, it does not explicitly mention that the tool operates on a symbol, which is a key requirement from the schema.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as get_support_resistance or get_chart_patterns. The description lacks context on appropriate scenarios or when not to use it, leaving the agent without decision support.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_fibonacci_levels (Grade: C)
Latest fibonacci retracement swings (UP/DOWN) with 236/382/500/618/786 levels.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | | |
| timeframe | No | | daily |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
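An illustrative request using the documented default timeframe. Whether other timeframe tokens (e.g., the 15min/5min values documented for get_ohlc_data) are accepted here is an open assumption, and the ticker is illustrative.

{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_fibonacci_levels",
    "arguments": {
      "symbol": "SBIN",
      "timeframe": "daily"
    }
  }
}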
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavior. It says 'Latest', implying recency, but does not state how many swings are returned, whether values are recalculated on each call, or any side effects. Limited transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is concise, but it sacrifices completeness. Not excessively long, but lacks important details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 2 parameters, no annotations, and an output schema present, the description should cover more. It fails to explain what the output contains or how to interpret the swings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, yet the description adds no meaning for 'symbol' or 'timeframe'. It does not explain what values 'timeframe' accepts or their expected format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides latest Fibonacci retracement swings with specific levels (236/382/500/618/786) and direction (UP/DOWN). However, it does not differentiate from sibling tools like get_support_resistance or get_technical_indicators.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. No mention of context, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_overview (Grade: A)
Market-wide snapshot: indices, top_gainers, top_losers, most_active, breadth.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
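Since the tool takes no parameters, a call is simply the tool name with an empty arguments object:

{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "get_market_overview",
    "arguments": {}
  }
}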
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Mentions 'snapshot' implying read-only, but no disclosure of rate limits, authentication, or data freshness. Minimal behavioral context beyond listing returned fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise at 8 words, lists components clearly. Could be more structured (e.g., bullet points) but efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Output schema covers return values. Description is minimal but sufficient for a zero-param tool. Lacks usage context, but completeness is adequate given schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters (0 params, schema coverage 100%). Baseline score of 4 per guidelines; the description adds no parametric detail, but none is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it's a market-wide snapshot and lists specific components (indices, top_gainers, etc.). Differentiates from sibling tools like get_stock_quote (individual stock) and screen_stocks (screening).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives provided. Usage is implied by purpose, but lacks guidance on when to choose this over similar snapshot tools. Sibling context helps but is not in description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ohlc_data (Grade: A)
Recent OHLCV candles. timeframe in {daily, 15min, 5min}; limit capped at 500; newest first.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| symbol | Yes | | |
| timeframe | No | | daily |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
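A sketch exercising the documented constraints: a 15min timeframe and a limit under the 500 cap. Only the ticker value is an assumption.

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_ohlc_data",
    "arguments": {
      "symbol": "RELIANCE",
      "timeframe": "15min",
      "limit": 100
    }
  }
}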
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description provides some behavioral details such as the limit cap and ordering, but does not disclose read-only status or potential failure conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence effectively conveys the core purpose, timeframe options, and constraints without extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema and 3 parameters, the description covers essential details; it could be enhanced by explicitly stating it is a read operation, but it is largely sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the description adds value by constraining timeframe values and specifying the limit cap, which is not evident from the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides 'Recent OHLCV candles' with specific timeframes and ordering, distinguishing it from siblings like get_candlestick_patterns or get_stock_quote.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It indicates recency and ordering but lacks explicit guidance on when to use this versus alternatives or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stock_quote (Grade: A)
Latest price snapshot for an NSE stock — last close, prev close, change %, day H/L, volume.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
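The simplest possible call. As the review below notes, the expected symbol format (e.g., 'RELIANCE' vs 'RELIANCE.NS') is undocumented; the bare NSE ticker used here is an assumption.

{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "get_stock_quote",
    "arguments": {
      "symbol": "RELIANCE"
    }
  }
}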
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full transparency burden. It lists output fields (last close, prev close, change %, day H/L, volume), making the return value clear. It does not mention side effects, but as a read-only data tool, this is adequate. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence, no wasted words. Front-loaded with key purpose and data fields. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple schema (1 parameter) and the presence of an output schema, the description covers the tool's behavior and return fields. It could add a brief usage note about NSE versus other exchanges, but this is not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter 'symbol' with 0% schema coverage. The description adds context by specifying 'NSE stock', implying the symbol is an NSE ticker, but does not clarify format (e.g., 'RELIANCE' vs 'RELIANCE.NS'). Partially compensates for lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides a 'Latest price snapshot for an NSE stock' and lists key data fields (last close, prev close, change %, day H/L, volume). It specifies the resource (stock quote) and market (NSE), distinguishing it from siblings like get_ohlc_data (historical) or compare_stocks (multi-stock).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for a quick current snapshot ('Latest price snapshot') but does not explicitly state when to use this tool versus alternatives like get_ohlc_data or get_volume_analysis. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_support_resistance (Grade: C)
Pivot points + historical support/resistance levels, ordered by price desc.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | | |
| timeframe | No | | daily |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description only mentions the output (pivot points + levels, ordered by price desc). It does not disclose calculation methods, data sources, or limitations (e.g., rounding levels, time period used). Behavioral traits like required permissions or rate limits are omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single 10-word sentence, which is concise but lacks structure (e.g., separating purpose, output format, parameters). Despite its brevity, it omits essential details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool returns technical analysis data (pivot points, support/resistance) and has only 2 parameters, the description is too brief. It does not clarify the calculation method or the intended use case, and the existence of an output schema does not compensate for the lack of parameter documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, and the description adds no meaning to the parameters 'symbol' and 'timeframe'. The agent must guess their roles and valid values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides pivot points and historical support/resistance levels, ordered by price descending. The verb is implied (get) and the resource is specific. However, it does not explicitly differentiate from sibling tools like get_demand_supply_zones, which might overlap in functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives no guidance on when to use this tool versus alternatives like get_fibonacci_levels or get_demand_supply_zones. There is no mention of appropriate contexts or preconditions for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_technical_indicators (Grade: C)
Latest technical indicators (EMAs, SMAs, RSI, MACD, BB, ADX, Stoch, ATR, OBV, VWAP).
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | | |
| timeframe | No | | daily |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
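An example that omits the optional 'timeframe' to fall back on the documented daily default; the ticker is illustrative.

{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "get_technical_indicators",
    "arguments": {
      "symbol": "TATAMOTORS"
    }
  }
}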
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must convey behavioral traits. It only lists indicators and does not disclose that this is a read operation, the data freshness, or any side effects. The output schema exists but the description adds no context about what is returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that lists indicators concisely. It avoids verbosity but could benefit from a verb and clearer structure. Nonetheless, it is efficient with no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of multiple indicators and the presence of an output schema, the description is too brief. It does not explain the return format, the coverage (e.g., does it return all listed indicators at once?), or the effect of the timeframe parameter. This is insufficient for an AI agent to fully understand the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage. The description does not explain the 'symbol' or 'timeframe' parameters, though 'symbol' is self-explanatory and 'timeframe' has a default. No additional meaning is provided beyond what the schema implies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description lists specific technical indicators (EMAs, SMAs, RSI, etc.), clearly indicating the tool returns these values. It distinguishes from sibling tools like get_candlestick_patterns or get_support_resistance which focus on patterns or levels. However, it lacks a strong action verb and does not explicitly state that it computes or retrieves the latest values.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like get_ohlc_data or get_volume_analysis. The description does not explain the purpose or context, leaving the agent to infer usage from the list of indicators alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_volume_analysis (Grade: B)
Volume diagnostic: top 5min hotspots + peak slot + OBV 20-day trend + today/20-day volume ratio.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It does not disclose behavioral traits such as authentication requirements, rate limits, data source, or error handling for invalid symbols. Only the output components are listed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no redundancy, listing the key diagnostic components efficiently. It is appropriately sized for a simple tool, though could benefit from slightly more context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter and an output schema, the description adequately outlines the outputs but lacks context on how to interpret them or when to combine with other tools. Missing usage guidance and behavioral details make it incomplete for a well-rounded description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not elaborate on the 'symbol' parameter beyond its existence in the schema. It fails to add context like expected format or examples, which is critical given the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a volume diagnostic tool and lists specific outputs (top 5min hotspots, peak slot, OBV 20-day trend, volume ratio). However, it does not explicitly differentiate from sibling tools like get_technical_indicators which may also include volume analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for volume-specific analysis but provides no explicit guidance on when to use this tool versus alternatives such as get_candlestick_patterns or get_support_resistance. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
screen_stocks (Grade: B)
Filter the active universe. Keys: rsi_min, rsi_max, adx_min, trend, near_demand_zone, pattern, min_volume_ratio, sector. At least one required.
| Name | Required | Description | Default |
|---|---|---|---|
| filters | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
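A hedged sketch of a screening request. The key names come from the tool's description, but the value types and ranges shown (numbers for the thresholds) are guesses, since the 'filters' object accepts arbitrary properties and documents none.

{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "screen_stocks",
    "arguments": {
      "filters": {
        "rsi_max": 35,
        "adx_min": 20,
        "min_volume_ratio": 1.5
      }
    }
  }
}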
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must bear full transparency burden. It lacks detail on side effects, rate limits, or what happens with no matches. The tool implies read-only operation, but this is not explicitly stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences with zero waste: a front-loaded purpose, an efficient listing of keys, and an explicit requirement. Ideal length for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and presence of an output schema, the description is adequate but lacks information about output format (e.g., what fields are returned). Still, it covers basic operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema is nearly empty (filters object with additionalProperties true). The description adds critical value by listing specific valid keys and requiring at least one, compensating for 0% schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool filters the active universe, with the specific verb 'Filter' and a clear resource. It stands apart from sibling tools like get_stock_quote or compare_stocks by screening a whole universe rather than operating on named symbols, though this differentiation is never made explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a usage hint: 'At least one required' for filter keys. However, it does not guide when to choose screen_stocks over siblings like compare_stocks or get_technical_indicators.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.