screen_stocks

Screen stocks using technical criteria like RSI, trend scores, EMA positions, and volume spikes to identify oversold bounces, trending patterns, and volume anomalies in the S&P 500 or custom tickers.

Instructions

Screen stocks against technical criteria — find oversold bounces, trending stocks, volume spikes, etc.

Scans the S&P 500 top 100 (or custom tickers) and filters by RSI, trend score, EMA position, and relative volume. Each stock gets a Trend Score from -100 (strong downtrend) to +100 (strong uptrend).
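As a rough illustration of two of the filters named here (not the server's actual implementation, whose formulas are not published), Wilder-smoothed RSI and relative volume can be computed from plain daily close/volume lists:

```python
def wilder_rsi(closes, period=14):
    """Wilder-smoothed RSI over a list of closing prices (needs > period closes)."""
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages, then apply Wilder smoothing.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

def relative_volume(volumes, lookback=20):
    """Today's volume divided by the trailing 20-day average volume."""
    avg = sum(volumes[-lookback - 1:-1]) / lookback
    return volumes[-1] / avg
```

Under these definitions, `rsi_max=30` keeps stocks whose RSI is at most 30, and `min_relative_volume=1.5` keeps stocks trading at 150% or more of their 20-day average volume.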

Args:
- rsi_max: Maximum RSI to filter for oversold stocks (e.g. 30)
- rsi_min: Minimum RSI to filter for overbought stocks (e.g. 70)
- trend_min: Minimum trend score (e.g. 15 for uptrend, 40 for strong uptrend)
- trend_max: Maximum trend score (e.g. -15 for downtrend, -40 for strong downtrend)
- above_200ema: If true, only stocks above the 200-day EMA
- above_50ema: If true, only stocks above the 50-day EMA
- min_relative_volume: Minimum relative volume vs the 20-day average (e.g. 1.5 = 50% above average)
- universe: "sp500" (top 100 by market cap) or "etfs" (sector + index ETFs)
- tickers: Custom list of tickers to screen (overrides universe)
- max_results: Maximum results to return (default 15)
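As an illustration, an oversold-bounce screen might be invoked with an argument payload like the following. Parameter names come from the Args list above; the specific values are hypothetical:

```python
# Hypothetical argument payload for an oversold-bounce screen:
# RSI at or below 30, price still above the 200-day EMA, and
# volume at least 50% above the 20-day average.
oversold_bounce_args = {
    "rsi_max": 30,
    "above_200ema": True,
    "min_relative_volume": 1.5,
    "universe": "sp500",  # default universe; a custom "tickers" list would override it
    "max_results": 10,
}
```

Every key is optional, so a payload can combine as many or as few filters as needed.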

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| rsi_max | No | | |
| rsi_min | No | | |
| trend_min | No | | |
| trend_max | No | | |
| above_200ema | No | | |
| above_50ema | No | | |
| min_relative_volume | No | | |
| universe | No | | sp500 |
| tickers | No | | |
| max_results | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively explains key behaviors: it scans and filters stocks, uses a Trend Score range (-100 to +100), and defaults to the 'sp500' universe with a maximum-results limit. However, it does not cover rate limits, error handling, or data freshness, which matter for a screening tool. The description adds substantial context but leaves some behavioral traits unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a purpose statement, scanning details, and a parameter section. Every sentence adds value, such as explaining the Trend Score range and parameter defaults. However, it could be slightly more front-loaded by moving key behavioral details earlier, and the parameter explanations are verbose but necessary given the lack of schema descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (10 parameters, no annotations, 0% schema coverage) and the presence of an output schema (which documents return values), the description is largely complete. It covers purpose, usage, parameters, and key behaviors like scoring. However, it does not describe the output format in prose, leaving that entirely to the output schema, and it could mention performance considerations or data sources for a screening tool, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must fully compensate. It provides detailed semantics for all 10 parameters, including explanations of what each parameter does (e.g., 'rsi_max: Maximum RSI to filter for oversold stocks'), examples (e.g., 'e.g. 30'), and interactions (e.g., 'tickers overrides universe'). This goes well beyond the basic schema, making parameter usage clear and actionable.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('screen stocks against technical criteria') and resources ('S&P 500 top 100 or custom tickers'), distinguishing it from siblings like 'get_stock_quote' or 'get_technical_indicators' by focusing on filtering rather than retrieval or calculation. It explicitly lists the types of criteria (oversold bounces, trending stocks, volume spikes) and the scoring system used.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (e.g., for screening based on technical indicators like RSI and trend scores) and implies alternatives by mentioning specific criteria, but it does not explicitly state when not to use it or point to sibling tools like 'find_breakouts' or 'compare_tickers' for related tasks. The guidance is sufficient for typical use cases but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
