Helium MCP Server - News, Markets & AI
Server Details
Real-time news with bias scoring, live market data, and AI-powered options pricing
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: connerlambden/helium-mcp
- GitHub Stars: 0
- Server Listing: Helium MCP Server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 9 of 9 tools scored. Lowest: 3.6/5.
Most tools have distinct purposes: get_all_source_biases, get_source_bias, and get_bias_from_url cover source-level vs article-level bias analysis; search_news and search_balanced_news differentiate raw articles from synthesized stories; get_ticker, get_option_price, and get_top_trading_strategies focus on market data. However, get_all_source_biases and get_source_bias overlap in providing source bias information, which could cause confusion despite different output formats.
Tool names follow a consistent verb_noun pattern (e.g., get_all_source_biases, get_bias_from_url, search_news) with clear actions like 'get' and 'search'. Minor deviations exist, such as get_ticker (singular) versus get_top_trading_strategies (plural), but overall naming is predictable and readable.
With 9 tools, the count is reasonable for the server's broad scope covering news bias analysis, market data, and meme searching. It's slightly heavy but manageable, as each tool addresses a specific aspect of the domain without obvious redundancy, though some overlap in bias tools exists.
The tool set covers key areas like bias analysis (source and article levels), news searching (raw and synthesized), and market data (tickers, options, strategies), but lacks update or delete operations for any domain. For example, there's no way to modify bias scores or update market data, which limits agent workflows to read-only tasks, creating notable gaps for interactive use.
Available Tools
10 tools

get_all_source_biases
Get bias scores for every news source in the Helium database.
Returns a list of all sources (active within the last 36 days, with >100 articles analyzed),
sorted by avg_social_shares descending. Use this to compare sources, find the most credible
outlets, identify politically extreme sources, or build a ranked overview of the media landscape.
Each entry contains:
- source_name, slug_name, page_url
- articles_analyzed: total articles analyzed for this source
- avg_social_shares: average social shares per article (proxy for reach/influence)
- emotionality_score (0-10): average emotional intensity of the writing
- prescriptiveness_score (0-10): how much the source tells readers what to think/do
- bias_values: dict mapping classifier key → integer score (-50 to +50 for bipolar,
0 to +50 for unipolar). These keys are identical to what get_bias_from_url returns,
so you can compare article-level and source-level scores directly.
Political / ideological (bipolar: neg=left pole, pos=right pole):
'liberal conservative bias' neg=liberal, pos=conservative
'libertarian authoritarian bias' neg=libertarian, pos=authoritarian
'dovish hawkish bias' neg=dovish, pos=hawkish
'establishment bias' neg=anti-establishment, pos=pro-establishment
Credibility / quality (bipolar):
'overall credibility' neg=uncredible, pos=credible
'integrity bias' neg=low integrity, pos=high integrity
'article intelligence' neg=low intelligence, pos=high intelligence
'delusion bias' neg=truth-seeking, pos=delusional
'objective subjective bias' neg=objective, pos=subjective
'bearish bullish bias' neg=bearish, pos=bullish
'emotional bias' neg=negative tone, pos=positive tone
Unipolar bias dimensions (higher = more of that trait):
'objective sensational bias' sensationalism
'opinion bias' opinion vs informative
'descriptive prescriptive bias' prescriptive vs descriptive
'political bias' political content
'fearful bias' fear-based framing
'overconfidence bias' overconfidence
'gossip bias' gossip
'manipulation bias' manipulative framing
'ideological bias' ideological rigidity
'conspiracy bias' conspiracy content
'double standard bias' double standards
'virtue signal bias' virtue signaling
'oversimplification bias' oversimplification
'appeal to authority bias' appeal to authority
'begging the question bias' question-begging
'victimization bias' victimization framing
'terrorism bias' terrorism content
'scapegoat bias' scapegoating
'hypocrisy bias' hypocrisy
'suicidal empathy bias' suicidal-empathy framing
'cruelty bias' cruelty
'woke bias' woke framing
'written by AI' AI-written likelihood
'immature bias' immaturity
'circular reasoning bias' circular reasoning
'covering the response bias' covering-the-response tactic
'spam bias' spam-like content
Tip: use get_source_bias for full narrative descriptions and recent articles on a specific source.
Tip: bias_values keys here are identical to those in get_bias_from_url and search_news, so you can compare them directly.
Warning: get_source_bias returns bias_scores with emoji-prefixed display keys (e.g. '🔵 Liberal <-> Conservative 🔴')
that are NOT interchangeable with the plain-text keys used here. Do not cross-reference them.

| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
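The ranking use cases above (compare sources, build a ranked overview of the media landscape) can be sketched in plain Python. The sample entries and the `rank_by_dimension` helper below are hypothetical; real data would come from calling get_all_source_biases through an MCP client, and only the documented fields (source_name, bias_values with plain-text keys) are assumed.

```python
# Rank sources by one plain-text bias_values dimension from a
# get_all_source_biases-style response. Sample entries are hypothetical.
sample_sources = [
    {"source_name": "Example Wire", "avg_social_shares": 1200,
     "bias_values": {"liberal conservative bias": -18, "overall credibility": 35}},
    {"source_name": "Example Herald", "avg_social_shares": 900,
     "bias_values": {"liberal conservative bias": 22, "overall credibility": 41}},
    {"source_name": "Example Post", "avg_social_shares": 3100,
     "bias_values": {"liberal conservative bias": 4, "overall credibility": -10}},
]

def rank_by_dimension(sources, key, descending=True):
    """Sort sources by one bias_values key, skipping sources not scored on it."""
    scored = [s for s in sources if key in s["bias_values"]]
    return sorted(scored, key=lambda s: s["bias_values"][key], reverse=descending)

most_credible = rank_by_dimension(sample_sources, "overall credibility")
print([s["source_name"] for s in most_credible])
# most credible first: Example Herald (41), Example Wire (35), Example Post (-10)
```

The same helper works for any plain-text key, e.g. 'liberal conservative bias' to find the politically most extreme sources.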
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does an excellent job disclosing behavior: it describes filtering criteria (active within 36 days, >100 articles), sorting (by avg_social_shares descending), return format details, and key compatibility notes with other tools. It doesn't mention rate limits or authentication needs, but covers most operational aspects thoroughly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, return format, bias key explanations, tips, warnings). While comprehensive, some bias dimension explanations could be more concise, but overall it is efficiently organized with little wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the tool (multiple bias dimensions, compatibility considerations) and the presence of an output schema, the description provides exceptional completeness. It thoroughly explains what the tool does, how results are filtered/sorted, detailed return format, bias key semantics, and crucial interoperability notes with sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so the baseline is 4. The description appropriately explains there are no parameters needed for this tool, as it retrieves all sources meeting the specified criteria without requiring user input.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves bias scores for all news sources in the Helium database, specifying it returns a list of sources with specific criteria (active within 36 days, >100 articles). It distinguishes from sibling tools like get_source_bias by indicating this is for a comprehensive overview rather than detailed narratives on a specific source.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided on when to use this tool (to compare sources, find credible outlets, identify extreme sources, build ranked overviews) and when to use alternatives (get_source_bias for full narratives on a specific source). Clear warnings about key differences with other tools prevent misuse.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_bias_from_url
Get bias analysis for a specific article by its URL.
Use this when you have a direct link to an article and want to know its political leaning,
credibility, emotionality, and other bias dimensions, without needing to know the source name first.
On success (found=true), returns:
- title, source, date, link, category
- teaser: article excerpt
- summary: one-sentence AI summary
- context: AI-generated context for the article
- bias_description: narrative description of this specific article's bias
- bias_values: dict of per-dimension bias scores using plain-text keys (same schema as
get_all_source_biases and search_news),
e.g. {"liberal conservative bias": 12.3, "overall credibility": 40.1, "emotional bias": -5.2, ...}
Positive values lean toward the second pole of each dimension (conservative, authoritarian, etc.).
- total_shares: total social shares
- wayback_link: Wayback Machine archive URL if available
- image: article image URL if available
On failure (found=false, HTTP 404):
- found: false
- message: explanation string
The URL is automatically queued for ingestion; retry after ~24 hours.
Tip: if you want source-level bias (not article-level), use get_source_bias instead.
Tip: bias_values keys here use plain-text format (e.g. 'liberal conservative bias') and are
identical to those in get_all_source_biases and search_news. Note: get_source_bias returns
bias_scores with emoji-prefixed display keys; do not cross-reference them with bias_values here.
Args:
url: Full article URL, e.g. 'https://www.nytimes.com/2024/01/01/us/politics/example.html'.

| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
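The success/failure contract described above (found=true vs found=false with automatic ingestion queuing) can be handled with a small dispatcher. The `summarize_article_bias` helper and both response dicts below are hypothetical illustrations shaped only by the documented fields.

```python
# Interpret a get_bias_from_url-style response: on found=true read the
# plain-text bias_values keys; on found=false the URL has been queued for
# ingestion, so the caller should retry in roughly 24 hours.
def summarize_article_bias(response):
    if not response.get("found"):
        # The server queues unknown URLs automatically; nothing else to do now.
        return {"status": "queued", "retry_after_hours": 24,
                "message": response.get("message", "")}
    bias = response["bias_values"]
    return {
        "status": "ok",
        "title": response["title"],
        # Positive values lean toward the second pole (conservative, etc.).
        "leans_conservative": bias.get("liberal conservative bias", 0) > 0,
        "credibility": bias.get("overall credibility"),
    }

hit = {"found": True, "title": "Example headline",
       "bias_values": {"liberal conservative bias": 12.3, "overall credibility": 40.1}}
miss = {"found": False, "message": "URL queued for ingestion"}

print(summarize_article_bias(hit)["leans_conservative"])  # True
print(summarize_article_bias(miss)["status"])             # queued
```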
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does so effectively. It discloses key behavioral traits: success/failure response structures (found=true/false), automatic queuing for ingestion with retry timing (~24 hours), and detailed output format including bias_values schema consistency with other tools. It doesn't mention rate limits or auth needs, but covers most operational aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose, usage context, success/failure outputs, tips, and args. Every sentence adds value, though it's moderately long due to comprehensive output details. It could be slightly more front-loaded, but the information density is high without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (bias analysis with multiple output dimensions), no annotations, and an output schema (implied by context signals), the description is highly complete. It thoroughly documents both success and failure cases, output fields with examples, behavioral notes (queuing/retry), and sibling tool differentiation. The output schema likely covers structure, so the description appropriately focuses on semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage (no parameter descriptions), so the description must fully compensate. It provides a dedicated 'Args' section with clear semantics: 'Full article URL, e.g. 'https://www.nytimes.com/2024/01/01/us/politics/example.html''. This adds essential context beyond the bare schema, explaining format and purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Get bias analysis for a specific article by its URL.' It specifies the verb ('Get bias analysis'), resource ('article'), and method ('by its URL'), clearly distinguishing it from siblings like get_source_bias (source-level) and search_news (search-based).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Use this when you have a direct link to an article and want to know its political leaning, credibility, emotionality, and other bias dimensions ā without needing to know the source name first.' It also includes a tip to use get_source_bias for source-level bias, clearly differentiating from alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_historical_options_data
Get the full historical options chain for a ticker on a specific date.
Returns the complete options chain including all expirations and contracts,
with bid, ask, mid prices, greeks, and Helium's proprietary model values
(helium_theo, helium_pitm, should_i_buy, should_i_sell, terminal_buy_pl,
terminal_sell_pl, etc.) baked into each contract.
Returns:
- symbol, date, data_source ('recent' or 's3')
- num_expirations: number of distinct expiration dates
- total_contracts: total number of option contracts
- option_chain: dict keyed by expiration index, each value is a list of option contracts
Each contract includes fields like: putCall, symbol, description, bid, ask, mark,
mid_price, strikePrice, expirationDate, daysToExpiration, delta, gamma, theta, vega,
impliedVolatility, openInterest, volume, helium_theo, helium_pitm, should_i_buy,
should_i_sell, terminal_buy_pl, terminal_sell_pl, and more.
Args:
symbol: Ticker symbol, e.g. 'AAPL', 'TSLA', 'SPY'.
date: Date in YYYY-MM-DD format, e.g. '2026-04-10'.

| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | | |
| symbol | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
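The nested return shape (option_chain keyed by expiration index, each value a list of contracts) can be flattened for analysis. The sample response and `flatten_chain` helper below are hypothetical sketches using only the field names documented above.

```python
# Walk an option_chain shaped like get_historical_options_data's documented
# return: a dict keyed by expiration index, each value a list of contracts.
sample = {
    "symbol": "SPY", "date": "2026-04-10", "num_expirations": 2,
    "option_chain": {
        "0": [{"putCall": "CALL", "strikePrice": 500.0, "bid": 3.1, "ask": 3.3},
              {"putCall": "PUT", "strikePrice": 500.0, "bid": 2.8, "ask": 3.0}],
        "1": [{"putCall": "CALL", "strikePrice": 510.0, "bid": 5.0, "ask": 5.4}],
    },
}

def flatten_chain(response):
    """Yield (expiration_index, contract) pairs across all expirations."""
    for exp_idx, contracts in response["option_chain"].items():
        for contract in contracts:
            yield exp_idx, contract

calls = [c for _, c in flatten_chain(sample) if c["putCall"] == "CALL"]
print(len(calls))  # 2
```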
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it describes the comprehensive nature of the return data ('complete options chain including all expirations and contracts'), mentions proprietary model values, and specifies the return structure including nested data organization. It doesn't mention rate limits, authentication needs, or data freshness considerations, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized, starting with the core purpose, then detailing return values, and finally explaining parameters. While comprehensive, some sentences about return field details could be more concise, but overall it is efficiently organized with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (historical options data with proprietary analytics) and the presence of an output schema, the description provides excellent contextual completeness. It thoroughly explains what data is returned, how it's structured, and includes examples of the proprietary fields. The output schema existence means the description doesn't need to exhaustively document return values, and it focuses appropriately on semantic understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing clear semantic meaning for both parameters: 'symbol' is explained as 'Ticker symbol' with examples ('AAPL', 'TSLA', 'SPY'), and 'date' is explained as 'Date in YYYY-MM-DD format' with an example ('2026-04-10'). This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get the full historical options chain') and resource ('for a ticker on a specific date'), distinguishing it from sibling tools like 'get_option_price' which likely provides current pricing rather than historical data. The verb+resource combination is precise and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'historical' data and mentioning a 'specific date', suggesting this tool is for historical analysis rather than current market queries. However, it doesn't explicitly state when to use this versus alternatives like 'get_option_price' or provide any exclusion criteria or prerequisites for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_option_price
Get Helium's proprietary ML model-predicted price for a specific option contract.
Helium trains per-symbol regression models on historical options data. This tool
looks up the most recent available options chain for the symbol (today or up to
5 days back), finds the exact contract matching strike/expiration/type, and runs
it through that model to produce a predicted fair-value price.
Returns:
- symbol: the ticker
- strike: the strike price used
- expiration: the expiration date used
- option_type: 'call' or 'put'
- predicted_price: Helium's model-predicted option price in dollars
- prob_itm: probability of expiring in the money (0.0 to 1.0), or null if model unavailable
- options_data_date: the date of the options chain snapshot the model was run on
(so you know how fresh the underlying market data is)
Throws an error if no options chain data is available for the symbol within the past 5 days,
or if the exact contract (strike/expiration/type combination) does not exist in that chain.
Args:
symbol: Ticker symbol, e.g. 'AAPL', 'SPY'.
strike: Strike price as a number, e.g. 150.0.
expiration: Expiration date as 'YYYY-MM-DD', e.g. '2026-06-20'.
option_type: Must be 'call' or 'put'.

| Name | Required | Description | Default |
|---|---|---|---|
| strike | Yes | | |
| symbol | Yes | | |
| expiration | Yes | | |
| option_type | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
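Since the tool throws if the exact contract does not exist, it can help to validate arguments client-side before calling. The `build_option_price_args` helper below is a hypothetical sketch that enforces the constraints stated in the description (option_type must be 'call' or 'put', expiration must be YYYY-MM-DD); the payload uses the tool's documented argument names.

```python
import re

def build_option_price_args(symbol, strike, expiration, option_type):
    """Validate and assemble the arguments dict for get_option_price."""
    if option_type not in ("call", "put"):
        raise ValueError("option_type must be 'call' or 'put'")
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", expiration):
        raise ValueError("expiration must be in YYYY-MM-DD format")
    return {"symbol": symbol, "strike": float(strike),
            "expiration": expiration, "option_type": option_type}

args = build_option_price_args("AAPL", 150.0, "2026-06-20", "call")
print(args["option_type"])  # call
```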
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It thoroughly explains the tool's behavior: it looks up recent options chain data, runs the contract through a regression model, returns specific fields including predicted price and probability, and throws errors under defined conditions (no data or contract mismatch). This covers key aspects like data freshness, model limitations, and error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It starts with the core purpose, explains the methodology, lists return values with explanations, details error conditions, and specifies parameters with examples. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of an ML-based prediction tool with no annotations, the description is highly complete. It covers the tool's purpose, methodology, return values (even though an output schema exists, it elaborates on semantics), error conditions, and parameter details. This provides a comprehensive understanding for an AI agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must fully compensate. It provides detailed semantics for all four parameters: symbol as a ticker with examples, strike as a number with examples, expiration as a date string with format and example, and option_type as 'call' or 'put'. This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get Helium's proprietary ML model-predicted price for a specific option contract.' It specifies the verb ('get'), resource ('price'), and methodology ('Helium's proprietary ML model-predicted'), distinguishing it from any potential sibling tools that might fetch market prices or other financial data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to obtain a predicted fair-value price for an option contract using Helium's ML model. It mentions prerequisites (requires options chain data within the past 5 days) and error conditions. However, it does not explicitly compare to alternatives or state when not to use it, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_source_bias
Get comprehensive bias analysis for a news source.
Returns:
- source_name, slug_name, page_url
- articles_analyzed: total articles in the bias database for this source
- avg_social_shares: average social shares per article
- emotionality_score (0-10): how emotional the writing is
- prescriptiveness_score (0-10): how much the source tells readers what to think/do
- bias_scores: dict of all measured bias dimensions with scores (-50 to +50 for bipolar,
0 to +50 for unipolar). WARNING: this endpoint returns emoji-prefixed display keys
(e.g. '🔵 Liberal <-> Conservative 🔴') rather than the plain-text keys used by
get_bias_from_url, get_all_source_biases, and search_news (e.g. 'liberal conservative bias').
Do not attempt to cross-reference bias_scores keys here with bias_values keys from other endpoints.
- bias_description: AI-generated overall bias summary narrative
- liberal_conservative_description: narrative on political leaning
- libertarian_authoritarian_description: narrative on authority stance
- signature_phrases: words/phrases uniquely overrepresented vs other sources
- signature_negative_phrases: uniquely negative/alarming phrases
- most_shared_phrases: phrases in their most viral articles
- most_emotional_phrases: phrases used in their most emotional articles
- pays_for_traffic_keywords: keywords this source buys ads for
- similar_sources: sources with the most similar bias profile
- most_different_sources: sources with the most different bias profile
- trends_graph_url: URL to a chart of this source's coverage volume over time
- bias_plot_urls: dict of 2D bias scatter plot image URLs (political_lib_auth, subjective_objective, informative_opinion, oversimplification_factful); only present when available
- recent_articles: list of most recent articles with full article fields and per-article bias_values
Throws an error if the source is not found.
Args:
source: Source name (e.g. 'Fox News', 'CNN', 'Reuters') or domain (e.g. 'foxnews.com').
Slug-style input (e.g. 'fox-news') is NOT supported; use full name or domain only.
recent_articles: Number of recent articles to include (1-50, default 10).

| Name | Required | Description | Default |
|---|---|---|---|
| source | Yes | | |
| recent_articles | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
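Because bias_scores here uses emoji-prefixed display keys, the safe place to read plain-text dimensions is the nested recent_articles, whose per-article bias_values match the other endpoints. The sample response and averaging helper below are hypothetical, shaped only by the fields documented above.

```python
# Average one plain-text dimension across the nested recent_articles of a
# get_source_bias-style response, leaving the display-keyed bias_scores alone.
sample = {
    "source_name": "Example Times",
    # display keys: do NOT cross-reference with plain-text keys
    "bias_scores": {"🔵 Liberal <-> Conservative 🔴": -12},
    "recent_articles": [
        {"title": "A", "bias_values": {"liberal conservative bias": -10}},
        {"title": "B", "bias_values": {"liberal conservative bias": -14}},
    ],
}

def avg_article_dimension(response, key):
    """Mean of a plain-text bias_values key over recent_articles, or None."""
    scores = [a["bias_values"][key]
              for a in response["recent_articles"] if key in a["bias_values"]]
    return sum(scores) / len(scores) if scores else None

print(avg_article_dimension(sample, "liberal conservative bias"))  # -12.0
```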
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing critical behavioral traits: it specifies the error condition ('Throws an error if the source is not found'), warns about key format differences from other endpoints, and details the comprehensive return structure including optional fields and constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose, followed by detailed return documentation and parameter explanations. While comprehensive, every sentence serves a clear purpose with minimal redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (comprehensive bias analysis with many return fields) and the presence of an output schema, the description is complete: it thoroughly documents the return structure, parameter usage, and behavioral constraints without needing to explain basic return values that the output schema would cover.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing detailed parameter semantics: it explains what 'source' accepts (name, domain, with examples and explicit exclusions) and defines 'recent_articles' (range 1-50, default 10), adding significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get comprehensive bias analysis') and resource ('for a news source'), distinguishing it from siblings like get_bias_from_url (which analyzes URLs) and get_all_source_biases (which lists all sources).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (analyzing a specific news source) and includes a warning about key differences from other bias endpoints, though it doesn't explicitly state when not to use it or name specific alternatives beyond the warning.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ticker
Get comprehensive data for a stock, ETF, or crypto ticker.
Returns:
- ticker, name, type (e.g. 'stock', 'etf', 'crypto'), industry
- latest_price, page_url
- bullish_case, bearish_case, potential_outcomes, takeaway, analysis_date (AI-generated)
- price_forecast_days, price_forecast_percent, price_forecast_lower/upper_bound_percent (model price forecast)
- future_uncertainty_urls: dict with image URLs for future_uncertainty, term_structure, volatility_surface, return_profile (when available)
- future_uncertainty_last_updated, term_structure_last_updated
- iv_rank_percentile (0-100, IV rank over past year)
- long_vol_call, long_vol_put, short_vol_call, short_vol_put: full option pack dicts (when available)
Throws an error if the ticker is not recognized.
Args:
ticker: Ticker symbol, e.g. 'AAPL', 'AMZN', 'BTC', 'ETH', 'SPY'.

| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
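The forecast fields above can be turned into dollar bounds. That the *_percent fields apply to latest_price is an assumption made here for illustration; the sample values and the `forecast_range` helper are hypothetical.

```python
# Convert get_ticker's percent-based forecast fields into dollar bounds,
# assuming (for illustration) the percentages are relative to latest_price.
sample = {
    "ticker": "AAPL", "latest_price": 200.0,
    "price_forecast_days": 30, "price_forecast_percent": 2.5,
    "price_forecast_lower_bound_percent": -6.0,
    "price_forecast_upper_bound_percent": 11.0,
}

def forecast_range(t):
    p = t["latest_price"]
    return {
        "days": t["price_forecast_days"],
        "expected": round(p * (1 + t["price_forecast_percent"] / 100), 2),
        "low": round(p * (1 + t["price_forecast_lower_bound_percent"] / 100), 2),
        "high": round(p * (1 + t["price_forecast_upper_bound_percent"] / 100), 2),
    }

print(forecast_range(sample))  # expected 205.0, low 188.0, high 222.0
```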
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it returns comprehensive data, includes AI-generated and model forecast elements, provides URLs for additional resources when available, and explicitly states it throws an error for unrecognized tickers. This covers most critical aspects, though it could mention rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (Returns, Throws, Args) and uses bullet points for readability. It is appropriately sized, but some bullet points could be more concise (e.g., listing individual fields like 'bullish_case' might be streamlined). Overall, it's efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (returns diverse data types), no annotations, and an output schema present, the description is highly complete. It thoroughly details return values, error conditions, and parameter usage, compensating for the lack of annotations and low schema coverage. The output schema likely covers return structure, so the description focuses on semantics, which it does effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, so the description must compensate. It adds meaningful semantics by explaining the 'ticker' parameter as a symbol for stocks, ETFs, or crypto, with examples like 'AAPL' and 'BTC'. This clarifies usage beyond the schema's basic string type, though it doesn't detail format constraints like case sensitivity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb ('Get') and resource ('comprehensive data for a stock, ETF, or crypto ticker'), distinguishing it from siblings like get_option_price or search_news. It explicitly lists the types of data returned, making the purpose unambiguous and distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying the types of tickers supported (stock, ETF, crypto) and noting it throws an error for unrecognized symbols. However, it lacks explicit guidance on when to use this tool versus alternatives like get_option_price or search_news, leaving some context to inference.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_top_trading_strategies
Get the top-ranked short volatility and long volatility option trading strategies.
Returns two ranked lists, short_volatility (sell premium / theta strategies) and
long_volatility (buy premium / gamma strategies), each containing up to `limit` tickers.
Each entry has the same fields as get_ticker:
- ticker, name, latest_price, page_url
- bullish_case, bearish_case, potential_outcomes, takeaway, analysis_date (AI-generated, when available)
- price_forecast_days, price_forecast_percent, price_forecast_lower/upper_bound_percent (when available)
- iv_rank_percentile (0-100, IV rank over past year, when available)
- short_vol_call, short_vol_put: best short volatility option packs (when available)
- long_vol_call, long_vol_put: best long volatility option packs (when available)
Sort options:
- "helium_rank" (default): Helium AI edge score, the best overall expected value
- "odds_of_profit": Highest probability of profit
- "historical_performance": Best annualized historical P&L across backtested trades
- "reward_to_risk": Best reward-to-risk ratio
- "smallest_max_loss": Strategies with the smallest maximum possible loss
Args:
sort: Ranking method (default "helium_rank"). One of: 'helium_rank', 'odds_of_profit',
'historical_performance', 'reward_to_risk', 'smallest_max_loss'.
limit: Number of results per strategy type (1-20, default 5).

| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | helium_rank |
| limit | No | | 5 |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
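Since the schema itself carries no constraint descriptions, a client may want to validate arguments before calling. The following is an illustrative helper, not part of the server API; the enum values and the 1-20 range are taken from the parameter docs above.

```python
# Client-side argument check for get_top_trading_strategies,
# mirroring the documented sort options and limit range.
VALID_SORTS = {
    "helium_rank", "odds_of_profit", "historical_performance",
    "reward_to_risk", "smallest_max_loss",
}

def strategy_args(sort="helium_rank", limit=5):
    """Build a validated arguments dict (helper name is illustrative)."""
    if sort not in VALID_SORTS:
        raise ValueError(f"sort must be one of {sorted(VALID_SORTS)}")
    if not 1 <= limit <= 20:
        raise ValueError("limit must be in 1-20")
    return {"sort": sort, "limit": limit}

print(strategy_args(sort="odds_of_profit", limit=10))
```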
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it returns two ranked lists with up to a limit, details the fields in each entry, and explains the sort options. It clarifies that some fields are 'when available,' indicating conditional data. However, it does not mention rate limits, authentication needs, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded, starting with the purpose and key output details. It uses bullet points and clear sections for fields and sort options, but could be slightly more concise by avoiding repetition in listing fields (e.g., 'Each entry has the same fields as get_ticker:' followed by a detailed list). Overall, it is efficient and informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the tool (2 parameters, no annotations, but with an output schema), the description is complete. It explains the purpose, output structure, parameters, and sort options in detail. The presence of an output schema means the description does not need to explain return values further, and it adequately covers all necessary context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It fully documents both parameters: 'sort' with its options and default, and 'limit' with its range and default. This adds essential meaning beyond the basic schema, making the parameters clear and actionable for users.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get the top-ranked short volatility and long volatility option trading strategies.' It specifies the verb ('Get') and the resource ('top-ranked...strategies'), and distinguishes itself from sibling tools like get_ticker by focusing on ranked lists of strategies rather than individual ticker data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by detailing the output structure and sort options, helping users understand when to use this tool for retrieving ranked strategies. However, it lacks explicit guidance on when to choose this over alternatives like get_ticker or search tools, and does not mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_balanced_news
Search Helium's balanced news stories: AI-synthesized articles that aggregate multiple sources.
Unlike search_news (which returns individual RSS articles), this returns Helium's own
synthesized stories: each one draws from multiple sources and includes an AI-written
summary, takeaway, context, evidence breakdown, potential outcomes, and relevant tickers.
Returns a list of stories, each with:
- title, simple_title, date, category
- page_url: full URL to the story on heliumtrades.com
- image: story image URL (when available)
- summary: Helium's synthesized overview
- takeaway: key conclusion
- context: background context
- evidence: numbered evidence items
- potential_outcomes: forward-looking outcomes with probabilities
- relevant_tickers: related stock tickers
- num_sources: number of source articles synthesized
- rank: search relevance score
Args:
query: Search keywords (required).
limit: Max results (1-50, default 10).
category: Filter by category. One of: 'tech', 'politics', 'markets', 'business', 'science'.
days_back: Only include stories from the last N days. 0 means no date filter.

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | 10 |
| query | Yes | | |
| category | No | | |
| days_back | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
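A small argument-builder sketch can make the constraints above concrete. This helper is an assumption for illustration (it is not part of the server); the 1-50 limit range and category enum come from the parameter docs above, and the category key is only sent when a filter is requested.

```python
# Illustrative argument builder for search_balanced_news.
CATEGORIES = {"tech", "politics", "markets", "business", "science"}

def balanced_news_args(query, limit=10, category=None, days_back=0):
    """Validate and assemble arguments; omit category when unset."""
    if not query:
        raise ValueError("query is required")
    if not 1 <= limit <= 50:
        raise ValueError("limit must be in 1-50")
    if category is not None and category not in CATEGORIES:
        raise ValueError(f"category must be one of {sorted(CATEGORIES)}")
    args = {"query": query, "limit": limit, "days_back": days_back}
    if category:
        args["category"] = category
    return args

print(balanced_news_args("fed policy", category="markets"))
```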
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool returns (a list of stories with detailed fields), the nature of the content (AI-synthesized, aggregated from multiple sources), and includes practical details like URL structure and optional image availability. It doesn't mention rate limits, authentication needs, or pagination behavior, but covers core functionality well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by differentiation from siblings, output details, and parameter explanations. Every sentence adds value with zero wasted text, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, no annotations, but with output schema), the description is complete. It covers purpose, sibling differentiation, detailed output structure, and parameter semantics. The existence of an output schema means the description doesn't need to explain return values in depth, and it adequately addresses what's needed for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides meaningful context for all parameters: explains 'query' as search keywords, 'limit' range and default, 'category' options with enumerated values, and 'days_back' filtering logic. This adds substantial value beyond the bare schema, though it doesn't detail parameter interactions or edge cases.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for 'balanced news stories' that are 'AI-synthesized articles that aggregate multiple sources.' It explicitly distinguishes this from 'search_news' which returns individual RSS articles, making the purpose specific and differentiated from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'Unlike search_news (which returns individual RSS articles), this returns Helium's own synthesized stories.' This directly addresses sibling differentiation with clear context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_memes
Search Helium's meme database by text (OCR + caption).
Returns matching memes ranked by relevance. Each result includes:
- id, caption, ocr (text extracted from the image)
- image: full URL to the meme image
- source: origin platform (e.g. 'reddit')
- num_likes: likes/upvotes on the original post
- date, is_video, rank
Args:
query: Search keywords (required). Matched against OCR text and captions.
limit: Max results (1-100, default 20).
days_back: Only include memes from the last N days. 0 means no date filter (default).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | 20 |
| query | Yes | | |
| days_back | No | | 0 |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
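Results arrive pre-ranked by relevance, but an agent may want to re-order by engagement. A minimal post-processing sketch, using fabricated sample records whose fields match the list above:

```python
# Re-rank search_memes results by num_likes (sample data is
# fabricated for illustration; field names follow the tool docs).
results = [
    {"id": 1, "caption": "cat", "num_likes": 120, "source": "reddit"},
    {"id": 2, "caption": "dog", "num_likes": 450, "source": "reddit"},
]

top = sorted(results, key=lambda m: m["num_likes"], reverse=True)
print(top[0]["id"])  # the most-liked meme comes first
```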
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It discloses that it returns ranked results and lists the fields included, which is helpful behavioral context. However, it doesn't mention rate limits, authentication needs, error conditions, or pagination behavior, leaving gaps for a search operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear purpose statement followed by return details and parameter explanations. Every sentence adds value, there's no redundancy, and it's appropriately sized for a search tool with 2 parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), 2 parameters with good description coverage, and no annotations, the description is mostly complete. It explains the search scope, return format, and parameters well, though could benefit from more behavioral context like rate limits or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that 'query' matches against OCR text and captions, clarifies that it's required, and specifies that 'limit' has a range (1-100) and default (20) - all information not present in the schema itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search Helium's meme database by text') and distinguishes it from sibling tools by specifying the resource (meme database) and search method (OCR + caption). It's not a tautology of the name and provides concrete details about what it searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While it mentions what the tool does, it doesn't specify scenarios where this search would be preferred over other tools or mention any prerequisites or limitations for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_news
Search news articles.
Returns a list of matching articles. Each article includes:
- title, source, date, link, category, rank, shares, summary
- bias_values: dict of per-dimension bias scores using plain-text keys (e.g. 'liberal conservative bias'),
same schema as get_bias_from_url and get_all_source_biases (when available)
- context: AI-generated contextual background for the article (when available)
- raw_data: additional raw metadata fields (when available)
Args:
query: Search keywords (required).
limit: Max results (1-100, default 20).
source: Filter by source name, e.g. 'CNN', 'Reuters'.
category: Filter by category. One of: 'trending', 'tech', 'markets', 'politics',
'business', 'science', 'memes'.
days_back: Only include articles from the last N days. 0 means no date filter. Default: 720 (2 years).
min_shares: Minimum total social shares.
sort: Sort order. One of: 'rank' (relevance, default), 'date' (newest), 'shares' (most shared).

| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | rank |
| limit | No | | 20 |
| query | Yes | | |
| source | No | | |
| category | No | | |
| days_back | No | | 720 |
| min_shares | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
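To show how the seven parameters combine, here is a hypothetical arguments dict for a filtered search; the values respect the constraints documented above (limit 1-100, sort enum, days_back overriding the 720-day default), but the specific filter choices are illustrative.

```python
# Hypothetical combined-filter arguments for search_news.
args = {
    "query": "interest rates",   # required
    "limit": 5,                  # within the 1-100 range
    "source": "Reuters",         # optional source filter
    "category": "markets",       # one of the documented categories
    "days_back": 30,             # override the 720-day (2-year) default
    "min_shares": 100,           # minimum total social shares
    "sort": "date",              # newest first instead of relevance
}
print(sorted(args))
```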
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return format in detail (including bias_values, context, and raw_data fields) and mentions availability conditions ('when available'), which is helpful. However, it doesn't cover behavioral aspects like rate limits, authentication needs, error conditions, or pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and well-structured: it starts with the core purpose, details the return format, then lists parameters with clear explanations. While thorough, every sentence adds value, and it's front-loaded with the most important information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (7 parameters, no annotations, but with output schema), the description is quite complete. It thoroughly documents parameters and return values. The output schema existence means the description doesn't need to explain return structure, but it does so anyway, which is helpful. Minor gaps include lack of error handling or rate limit information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate fully. It provides comprehensive parameter documentation: explains all 7 parameters, their purposes, constraints (e.g., limit range 1-100), default values, and enum values for 'category' and 'sort'. This adds significant meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches news articles and returns matching results, which is a specific verb+resource combination. However, it doesn't differentiate from sibling tools like 'search_balanced_news' or 'search_memes' beyond the general domain of news.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'search_balanced_news' or 'search_memes'. The description only explains what the tool does, not when it's appropriate or what distinguishes it from similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
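Before publishing, it can help to sanity-check the file locally. A minimal validation sketch, assuming only the structure shown in the example above (the helper and its rules are not part of Glama's tooling):

```python
import json

# Minimal structural check for a /.well-known/glama.json file:
# require at least one maintainer entry carrying an email key.
def check_glama_json(text):
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    if not maintainers or not all("email" in m for m in maintainers):
        raise ValueError("at least one maintainer with an email is required")
    return doc

sample = ('{"$schema": "https://glama.ai/mcp/schemas/connector.json", '
          '"maintainers": [{"email": "your-email@example.com"}]}')
print(check_glama_json(sample)["maintainers"][0]["email"])
```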
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail: every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control: enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management: store and rotate API keys and OAuth tokens in one place
Change alerts: get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption: public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics: see which tools are being used most, helping you prioritize development and documentation
Direct user feedback: users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!
Your Connectors
Sign in to create a connector for this server.