Mansa African Markets
Server Details
Live African stock market data — NGX, GSE, NSE, JSE, BRVM and 8 more. Prices, indices and movers.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: HeyZod/mansa-african-markets-mcp
- GitHub Stars: 0
- Server Listing: Mansamarkets-mcp
Available Tools
14 tools

get_african_exchange (A, Read-only, Idempotent)
Get detailed data for one specific African exchange by ID — index value, change, trading hours, currency, and status.
| Name | Required | Description | Default |
|---|---|---|---|
| exchange | Yes | Exchange ID. Options: nigeria, ghana, kenya, south-africa, ivory-coast, tanzania, zambia, egypt, morocco, botswana, mauritius, zimbabwe, uganda | |
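A call to this tool can be issued as a JSON-RPC `tools/call` request over the server's Streamable HTTP transport. The sketch below builds such a request body and validates the `exchange` enum from the table above; the method name follows the MCP specification, while the transport, endpoint, and response handling are omitted and would depend on your client.

```python
import json

# The 13 exchange IDs accepted by get_african_exchange, per the schema above.
VALID_EXCHANGES = {
    "nigeria", "ghana", "kenya", "south-africa", "ivory-coast",
    "tanzania", "zambia", "egypt", "morocco", "botswana",
    "mauritius", "zimbabwe", "uganda",
}

def build_exchange_request(exchange: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call body for one specific exchange."""
    if exchange not in VALID_EXCHANGES:
        raise ValueError(f"unknown exchange ID: {exchange!r}")
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_african_exchange",
            "arguments": {"exchange": exchange},
        },
    })
```

Validating against the enum client-side fails fast on a typo such as `"Nigeria"` (the IDs are lowercase) instead of spending a round trip on a request the server will reject.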
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations cover the safety profile (read-only, idempotent, non-destructive), the description adds valuable behavioral context by disclosing what data fields are returned (index value, change, trading hours, currency, status). This helps the agent anticipate the output structure without an output schema, though it omits details like rate limits or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the action ('Get detailed data') and uses an em-dash to append the specific return fields without wasting words. Every clause earns its place by clarifying scope or output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a simple single-parameter tool with complete schema coverage and safety annotations, the description is sufficiently complete. It partially compensates for the missing output schema by listing the return data fields (index value, status, etc.), though explicitly stating the return type or structure would strengthen it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (the enum and parameter description are comprehensive), the description meets the baseline expectation. It adds semantic value by connecting the parameter conceptually ('by ID'), but does not need to compensate for missing schema information or add syntax details beyond what the enum already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') + resource ('African exchange') and clarifies scope ('one specific...by ID'). It distinguishes from sibling 'get_african_exchanges' (plural) by emphasizing 'one specific' and differentiates from 'get_african_exchange_stocks'/'get_african_exchange_movers' by listing return fields (index value, trading hours, etc.) that indicate this returns exchange metadata rather than constituent stocks or movers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'one specific...by ID' provides implied usage guidance (use when you need a single exchange's details, not a list), but lacks explicit when/when-not statements or named alternatives. It does not clarify when to use this versus 'get_african_indices' or other specific data tools, leaving some ambiguity for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_african_exchange_movers (A, Read-only, Idempotent)
Get the top gainers and/or losers on a specific African exchange today.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Which movers to return. | both |
| limit | No | Number of movers per category. | 10 |
| exchange | Yes | Exchange ID. Options: nigeria, ghana, kenya, south-africa, ivory-coast, tanzania, zambia, egypt, morocco, botswana, mauritius, zimbabwe, uganda | |
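Since `type` and `limit` are optional with documented defaults, only `exchange` must be supplied. A minimal sketch of assembling the arguments object, where the accepted values for `type` (`gainers`, `losers`, `both`) are an assumption inferred from the description, not stated in the schema:

```python
# Assumed accepted values for "type"; the schema documents only the default.
MOVER_TYPES = {"gainers", "losers", "both"}

def movers_arguments(exchange: str, type: str = "both", limit: int = 10) -> dict:
    """Arguments for get_african_exchange_movers with the documented defaults."""
    if type not in MOVER_TYPES:
        raise ValueError(f"unsupported type: {type!r}")
    return {"exchange": exchange, "type": type, "limit": limit}
```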
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and non-destructive status. The description adds valuable temporal context ('today') indicating daily data freshness and scope limitation, but does not disclose rate limits, caching behavior, or handling of market holidays when no data exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, efficient sentence with zero waste. The structure is front-loaded with the action ('Get'), followed by the data type ('top gainers and/or losers'), and qualified by scope ('on a specific African exchange today'). Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the moderate complexity (3 parameters, 2 enums) and presence of annotations, the description is complete for selection purposes. It appropriately omits return value details (no output schema exists), though it could briefly mention that it returns ranked stock performance data to set expectations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents all parameters. The description provides semantic mapping by referencing 'gainers and/or losers' (corresponding to the 'type' parameter) and 'specific African exchange' (corresponding to 'exchange'), meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Get), resource (top gainers and/or losers), and scope (specific African exchange today). It effectively distinguishes from sibling 'get_pan_african_movers' by emphasizing 'specific African exchange' versus pan-African coverage, and from general exchange tools by specifying 'movers' data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit context through 'specific African exchange,' suggesting use when analyzing a particular market rather than pan-African data. However, it lacks explicit guidance on when to use this versus the NGX-specific tools (get_ngx_top_gainers/losers) or what prerequisites exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_african_exchanges (A, Read-only, Idempotent)
Get a list of all African stock exchanges covered by Mansa API — NGX, GSE, NSE, JSE, BRVM, DSE, LuSE, EGX, CSE, BSE, SEM, ZSE, USE — with index levels, daily change, stocks count, and status.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already confirm the operation is read-only, idempotent, and non-destructive. The description adds valuable behavioral context by disclosing the specific data fields returned (index levels, daily change, stocks count, and status), compensating for the lack of an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that front-loads the core action ('Get a list...') and efficiently packs in the scope (all African exchanges), specific examples (the 13 tickers), and return data structure. No words are wasted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description effectively compensates by detailing the returned data fields. It successfully conveys the tool's scope and behavior for a simple list operation, though it could optionally mention pagination or rate limiting for maximum completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters required, the schema coverage is trivially complete. The description correctly focuses on the return payload rather than parameters, meeting the baseline expectation for parameter-less tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('list of all African stock exchanges'), and explicitly enumerates the 13 exchanges covered (NGX, GSE, NSE, etc.). This clearly distinguishes it from the singular sibling 'get_african_exchange' and stock-specific tools like 'get_african_exchange_stocks'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the plural naming and exhaustive list of exchange tickers imply this is the comprehensive 'get all' endpoint (suggesting use when no specific exchange is targeted), the description lacks explicit guidance on when to use this versus the singular 'get_african_exchange' or other sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_african_exchange_stocks (A, Read-only, Idempotent)
Get stocks on a specific African exchange with prices, change percent, volume, market cap, and sector. Supports filtering and sorting.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of stocks to return. | 50 |
| order | No | Sort order. | desc |
| sector | No | Filter by sector (optional). | |
| sort_by | No | Sort by: price, change_pct, volume, market_cap (optional). | |
| exchange | Yes | Exchange ID. Options: nigeria, ghana, kenya, south-africa, ivory-coast, tanzania, zambia, egypt, morocco, botswana, mauritius, zimbabwe, uganda | |
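The filter and sort parameters are optional and can be omitted from the arguments entirely. A sketch of assembling them, with the `sort_by` options taken from the table above; the `asc`/`desc` values for `order` are an assumption, since the schema states only the default:

```python
# sort_by options per the parameter table above.
SORT_FIELDS = {"price", "change_pct", "volume", "market_cap"}

def stocks_arguments(exchange, limit=50, order="desc", sector=None, sort_by=None):
    """Arguments for get_african_exchange_stocks; optional filters are
    included only when set, mirroring the documented defaults."""
    if order not in {"asc", "desc"}:  # assumed enum; schema documents only the default
        raise ValueError(f"unsupported order: {order!r}")
    args = {"exchange": exchange, "limit": limit, "order": order}
    if sector is not None:
        args["sector"] = sector
    if sort_by is not None:
        if sort_by not in SORT_FIELDS:
            raise ValueError(f"unsupported sort_by: {sort_by!r}")
        args["sort_by"] = sort_by
    return args
```

Omitting unset optionals rather than sending nulls keeps the request aligned with the schema's defaults.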
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish the operation is read-only, idempotent, and non-destructive. The description adds valuable behavioral context by listing the specific data fields returned (prices, change percent, volume, market cap, sector) and explicitly stating that 'filtering and sorting' are supported capabilities, which aligns with the parameter schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. The first sentence front-loads the core purpose and return payload, while the second efficiently covers the operational capabilities. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple flat schema (no nesting), complete parameter documentation, and clear annotations, the description adequately covers the tool's purpose and capabilities. It appropriately omits return value details (no output schema exists to describe), though it could benefit from a note about the 50-result default pagination behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents all parameters including defaults. The description mentions 'filtering and sorting' which semantically maps to the sector, sort_by, and order parameters, but does not add syntax, format details, or constraints beyond what the schema already provides. Baseline 3 is appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the core action ('Get stocks'), the resource ('specific African exchange'), and enumerates the data fields returned ('prices, change percent, volume, market cap, and sector'). However, it does not explicitly differentiate from sibling tools like 'get_african_exchange_movers' or specific exchange tools like 'get_nasd_stocks'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives. It does not clarify the distinction between this generic exchange query and specialized siblings (e.g., 'get_ngx_top_gainers' for specific gainers/losers or 'get_african_exchange' for exchange metadata), nor does it mention prerequisites or rate limits.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_african_indices (A, Read-only, Idempotent)
Get all major African market indices in one call — NGX ASI, GSE-CI, NASI, J203, BRVM-CI, LASI and more.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable scope context beyond annotations by listing specific indices covered (NGX ASI, GSE-CI, etc.) and notes efficiency ('in one call'). Does not contradict annotations (readOnlyHint, openWorldHint all consistent with 'Get' operation).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, efficient sentence with zero waste. Examples appended after em-dash provide maximum information density without clutter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a simple read-only retrieval tool with rich annotations. Lists representative indices to indicate output scope, though could optionally mention data freshness or response format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, which per evaluation rules establishes a baseline of 4. No parameter semantics needed or expected.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Get') + resource ('African market indices') with concrete examples (NGX ASI, GSE-CI, etc.) that clearly distinguish it from sibling tools like get_african_exchange or get_pan_african_movers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage through the enumerated index examples, but lacks explicit guidance on when to use this versus alternatives like get_ngx_market_overview or get_african_exchange.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_nasd_stocks (A, Read-only, Idempotent)
Get all 45 equities on Nigeria's NASD OTC Securities Exchange.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly, idempotent, and non-destructive hints. The description adds valuable scope context (the fixed set of '45 equities' and the specific 'Nigeria's NASD OTC Securities Exchange' designation) but omits details on data freshness, rate limiting, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single efficient sentence that front-loads the action and scope. There is no redundant or wasted text; every word contributes to identifying the tool's function and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple parameter-less read operation with good safety annotations, the description adequately covers the tool's purpose. While no output schema exists, the phrase 'Get all 45 equities' sufficiently indicates the return type for this straightforward listing tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which per guidelines warrants a baseline score of 4. The description correctly implies no filtering or input is required by stating 'Get all' without referencing any parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and identifies the exact resource ('all 45 equities on Nigeria's NASD OTC Securities Exchange'). The mention of 'NASD OTC' effectively distinguishes this tool from NGX-related siblings (get_ngx_all_stocks, etc.) and the generic get_african_exchange_stocks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly differentiates usage by specifying 'NASD OTC' versus the 'NGX' tools in the sibling list, but provides no explicit guidance on when to choose this over get_african_exchange_stocks or whether this is the preferred endpoint for Nigerian OTC data specifically.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ngx_all_stocks (A, Read-only, Idempotent)
Get the full list of all 148+ equities listed on the NGX with current prices, daily change percent, volume, market cap, and sector.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations cover safety (readOnlyHint, idempotentHint), the description adds valuable behavioral context by disclosing exactly which data fields are returned (current prices, daily change percent, volume, market cap, and sector), compensating for the lack of an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, efficiently structured sentence that is front-loaded with the action verb and packs specific scope ('148+'), exchange ('NGX'), and return fields without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description effectively compensates by enumerating the specific data fields returned. It adequately covers the tool's simple (no-param) but data-rich nature, though it could optionally note the return format (e.g., array of objects).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, establishing a baseline of 4. The description appropriately requires no parameter explanation since the schema is empty.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') with clear resource scope ('full list of all 148+ equities listed on the NGX') and distinguishes from siblings like get_ngx_stock_price or get_ngx_top_gainers by emphasizing comprehensive coverage ('all', 'full list') versus filtered/single-stock queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through 'full list' (suggesting use when comprehensive data is needed), but does not explicitly state when NOT to use it or name specific alternatives like get_ngx_stock_price for single-stock queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ngx_disclosures (A, Read-only, Idempotent)
Get the latest 200 corporate disclosures and regulatory announcements from NGX-listed companies.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, non-destructive, idempotent). Description adds valuable quantitative constraint 'latest 200' indicating hard limit/pagination behavior. However, lacks details on return format, time window for 'latest', or filtering capabilities beyond the limit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action verb. Every phrase earns its place: 'latest 200' (quantifies result set), 'corporate disclosures and regulatory announcements' (specific content type), 'NGX-listed companies' (market scope). No redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and rich annotations covering behavioral safety, the description adequately covers the tool's purpose and scope. Without an output schema, it could benefit from describing the disclosure data structure or timestamp fields, but sufficiently complete for agent selection and basic invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters. Per scoring rules, 0-params baseline is 4. Description appropriately requires no additional parameter explanation since the tool is invoked without arguments.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' paired with exact resource 'corporate disclosures and regulatory announcements', quantified scope 'latest 200', and clear geographic/market scope 'NGX-listed companies'. Distinct from sibling price/market data tools (get_ngx_stock_price, get_ngx_market_overview) by focusing on regulatory filings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clear context provided by specifying content type (disclosures vs. prices), implicitly guiding when to use (regulatory/announcement data needs). Lacks explicit comparison to siblings or exclusion criteria (e.g., 'use when you need filings, not price data'), but domain is distinct enough to infer appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ngx_market_overview (A, Read-only, Idempotent)
Get a real-time overview of the Nigerian Stock Exchange (NGX). Returns the All Share Index (ASI), market capitalisation, trading volume, deals, advancers, and decliners. Use this when the user asks about the Nigerian stock market at a high level.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations cover the safety profile (readOnly, idempotent, non-destructive), the description adds valuable behavioral context by specifying the data is 'real-time' and explicitly listing the returned fields (ASI, market cap, volume, deals, advancers, decliners). This compensates for the missing output schema. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The three-sentence structure is optimally front-loaded: purpose (sentence 1), return values (sentence 2), and usage trigger (sentence 3). Every sentence earns its place with no redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters) and presence of safety annotations, the description is sufficiently complete. It compensates for the lack of an output schema by enumerating the specific metrics returned. A score of 5 would require additional context like rate limiting or caching behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description correctly does not invent parameter requirements, maintaining consistency with the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get a real-time overview') and resource ('Nigerian Stock Exchange'). It distinguishes from siblings like get_ngx_stock_price or get_ngx_top_gainers by emphasizing it returns aggregate metrics (ASI, market capitalisation, volume, advancers/decliners) rather than individual securities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance ('Use this when the user asks about the Nigerian stock market at a high level'), which helps the agent select this over specific stock lookup tools. However, it does not explicitly name sibling alternatives or state exclusion conditions (e.g., 'do not use for individual stock queries').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ngx_market_status (A, Read-only, Idempotent)
Check whether the Nigerian Stock Exchange (NGX) is currently open or closed. Returns OPEN, CLOSED, or ENDOFDAY.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, non-destructive). Description adds valuable behavioral context not in annotations: specific return values (OPEN, CLOSED, ENDOFDAY) and temporal aspect ('currently'). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. First sentence establishes purpose, second discloses return values. Front-loaded with critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple state-checking tool with no parameters, description is complete. Explicitly documents return values (OPEN, CLOSED, ENDOFDAY) compensating for lack of output schema. Annotations provide safety context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters. Per rubric, 0 params = baseline 4. Description correctly does not invent parameter semantics where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with explicit resource 'Nigerian Stock Exchange (NGX)' and scope 'currently open or closed'. Clearly distinguishes from siblings like get_ngx_stock_price or get_ngx_market_overview by focusing specifically on market open/closed status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context about when to use (to determine NGX trading status) but does not explicitly mention alternatives or when-not-to-use compared to siblings like get_african_exchange or get_ngx_market_overview. Strong implied usage from specificity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ngx_stock_priceARead-onlyIdempotentInspect
Get the latest price and trading history for a specific NGX-listed stock by ticker symbol (e.g. DANGCEM, GTCO, MTNN, ZENITHBANK, ACCESSCORP). Use this when the user asks about the price of a particular Nigerian stock.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | NGX ticker symbol (e.g. DANGCEM, GTCO, MTNN). Case-insensitive. | |
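The schema notes the `symbol` parameter is case-insensitive. A small client-side sketch that normalizes a user-supplied ticker before building the call arguments (the helper name is hypothetical; the server accepts any case either way):

```python
def ngx_price_arguments(symbol: str) -> dict:
    """Normalize a ticker to the upper-case form NGX listings use.

    The tool tolerates any case; normalizing just keeps logs and caches consistent.
    """
    ticker = symbol.strip().upper()
    if not ticker:
        raise ValueError("symbol must be a non-empty NGX ticker, e.g. DANGCEM")
    return {"symbol": ticker}

ngx_price_arguments("gtco")    # {"symbol": "GTCO"}
ngx_price_arguments(" mtnn ")  # {"symbol": "MTNN"}
```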
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only, safe, idempotent behavior. The description adds valuable behavioral context beyond annotations by specifying that the tool returns both 'latest price and trading history' (not just current price), clarifying the data payload without contradicting safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes action, resource, and examples; second provides usage trigger. Front-loaded with critical information and no filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read tool with rich annotations, the description adequately compensates for missing output schema by specifying return content ('price and trading history'). Complete for complexity level, though could explicitly mention data freshness or pagination if applicable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter description and examples. The description reinforces the examples (adding ZENITHBANK and ACCESSCORP to the schema's list) but primarily relies on the schema for parameter semantics, which is appropriate given full coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' with clear resource 'latest price and trading history' and scope 'NGX-listed stock'. Explicitly distinguishes from siblings like get_ngx_all_stocks and get_african_exchange by specifying 'specific' and 'NGX-listed' (Nigerian) scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit positive guidance: 'Use this when the user asks about the price of a particular Nigerian stock.' This effectively scopes usage against sibling tools, though it lacks explicit 'when-not' exclusions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ngx_top_gainersARead-onlyIdempotentInspect
Get the top gaining stocks on the NGX today, ranked by percentage price increase. Use this when the user asks which Nigerian stocks are up the most today.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of top gainers to return. | 10 |
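`limit` is optional and defaults to 10. A sketch of the ranking the description implies, applied to local data (the `pct_change` field name is an assumption; the listing publishes no output schema):

```python
def top_gainers(stocks: list[dict], limit: int = 10) -> list[dict]:
    """Rank by percentage change, descending, mirroring the tool's default limit."""
    return sorted(stocks, key=lambda s: s["pct_change"], reverse=True)[:limit]

sample = [
    {"symbol": "GTCO", "pct_change": 4.2},
    {"symbol": "MTNN", "pct_change": -1.1},
    {"symbol": "DANGCEM", "pct_change": 7.8},
]
top_gainers(sample, limit=2)  # DANGCEM first, then GTCO
```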
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true. Description adds valuable behavioral context beyond annotations: the time scope ('today') and ranking methodology ('percentage price increase'). However, it omits error handling, empty result behavior, or data freshness details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first defines the operation and ranking logic, second provides invocation trigger. Information is front-loaded and appropriately sized for a single-parameter read operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list retrieval tool with optional limit and good safety annotations, the description is adequately complete. It conceptually describes the return value (ranked gainers) despite no output schema. Minor gap: could mention that results are limited/controlled by the limit parameter or describe the return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (the 'limit' parameter is fully documented in the schema). Description does not mention the parameter at all, but per rubric, baseline is 3 when schema carries the load. No additional semantic context (e.g., max limit values) is provided in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('Get'), resource ('top gaining stocks on the NGX'), and ranking criteria ('ranked by percentage price increase'). Clearly distinguishes from sibling get_ngx_top_losers by specifying 'gaining' and from get_african_exchange_movers by specifying 'NGX' (Nigerian) scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('Use this when the user asks which Nigerian stocks are up the most today'). Lacks explicit when-NOT-to-use or named alternatives (e.g., doesn't mention get_ngx_top_losers for declining stocks), though the directional specificity ('up') implies the distinction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ngx_top_losersARead-onlyIdempotentInspect
Get the top losing stocks on the NGX today, ranked by percentage price decline. Use this when the user asks which Nigerian stocks are down the most today.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of top losers to return. | 10 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds temporal context ('today') and clarifies the ranking algorithm (percentage decline), but omits details on rate limits, pagination behavior, or empty result handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose, second provides usage context. Front-loaded with the action verb and appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately complete for a simple read-only list tool with one optional parameter. Given the lack of output schema, could have mentioned the return format (e.g., 'returns array of stock objects'), but the core functionality is sufficiently described.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'limit' parameter, the schema carries the full semantic load. The description correctly stays silent on parameters since the schema is self-documenting, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Get), resource (top losing stocks on NGX), and ranking methodology (percentage price decline). It effectively distinguishes from sibling tool get_ngx_top_gainers by specifying 'losers' and 'decline'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('when the user asks which Nigerian stocks are down the most today'), giving the agent clear invocation triggers. Lacks explicit exclusion guidance (when NOT to use vs. get_ngx_top_gainers), preventing a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pan_african_moversARead-onlyIdempotentInspect
Get the biggest stock movers across ALL African exchanges combined — top gainers and losers by % change from every covered market today.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of movers per category. | 10 |
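Note that `limit` applies per category, so `limit=10` can return up to 20 rows in total: 10 gainers and 10 losers. A sketch of that split (field names are assumptions, as the listing publishes no output schema):

```python
def pan_african_movers(stocks: list[dict], limit: int = 10) -> dict:
    """Split stocks into top gainers and top losers, `limit` rows per category."""
    ranked = sorted(stocks, key=lambda s: s["pct_change"], reverse=True)
    return {
        "gainers": [s for s in ranked if s["pct_change"] > 0][:limit],
        "losers": [s for s in reversed(ranked) if s["pct_change"] < 0][:limit],
    }

sample = [
    {"symbol": "SCOM", "exchange": "kenya", "pct_change": 5.0},
    {"symbol": "MCB", "exchange": "mauritius", "pct_change": -7.2},
    {"symbol": "NPN", "exchange": "south-africa", "pct_change": 2.1},
]
pan_african_movers(sample, limit=1)  # one gainer (SCOM), one loser (MCB)
```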
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (readOnly, idempotent, openWorld), the description adds critical behavioral context: the metric type ('% change'), the timeframe ('today'), that it returns both gainers and losers simultaneously, and the scope ('every covered market'). It aligns with openWorldHint by noting coverage limitations ('covered market'). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence with zero waste. Information is front-loaded ('Get the biggest stock movers'), followed by scope ('across ALL African exchanges'), and specific metrics ('% change', 'today'). Every phrase earns its place in distinguishing scope and behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 optional parameter, no nested objects), 100% schema coverage, and strong annotations, the description adequately covers what the tool returns (categorized lists of gainers/losers). Without an output schema, it could describe the return structure format, but it sufficiently conveys the data content for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description implies the categorical structure ('top gainers and losers') which provides context for the 'limit' parameter (it's per category), but does not explicitly describe parameter syntax or validation rules beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Get') and clearly identifies the resource ('biggest stock movers', 'top gainers and losers'). It explicitly distinguishes from siblings by emphasizing 'ALL African exchanges combined' and 'every covered market,' contrasting with single-exchange tools like get_ngx_top_gainers or get_african_exchange_movers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear contextual differentiation through 'ALL' and 'combined' versus presumably single-exchange siblings, implicitly signaling when to use this for pan-African aggregation. However, it lacks explicit 'when-not-to-use' guidance or explicit naming of alternatives like get_african_exchange_movers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
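A quick local sanity check of the manifest before publishing can catch shape errors early. A minimal sketch, assuming only the fields shown above matter for verification:

```python
import json

def validate_glama_manifest(text: str) -> bool:
    """Check a /.well-known/glama.json payload has maintainers with emails."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict):
        return False
    maintainers = data.get("maintainers")
    return (
        isinstance(maintainers, list)
        and len(maintainers) > 0
        and all(isinstance(m, dict) and "email" in m for m in maintainers)
    )

manifest = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
validate_glama_manifest(manifest)  # True
```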
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama cannot successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.