Glama

stockscope-mcp

Server Details

SEC EDGAR financial data for AI agents — revenue, margins, filings, company comparisons.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Stewyboy1990/stockscope-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade B)

Average 3.4/5 across 6 of 6 tools scored.

Server Coherence (Grade A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: stock_compare compares two companies, stock_filings retrieves SEC filings, stock_financials provides current financial data, stock_history shows historical trends, stock_insiders covers insider transactions, and stock_sector_peers identifies industry peers. An agent can easily distinguish between them based on their specific functions.

Naming Consistency: 5/5

All tools follow a consistent 'stock_' prefix with descriptive nouns (compare, filings, financials, history, insiders, sector_peers), using snake_case uniformly. This predictable pattern makes the tool set easy to navigate and understand at a glance.

Tool Count: 5/5

With 6 tools, the set is well-scoped for a stock analysis server, covering key aspects like financial data, filings, history, comparisons, insider activity, and sector analysis. Each tool earns its place without feeling bloated or insufficient for the domain.

Completeness: 4/5

The tool set covers most core needs for US public company analysis, including data retrieval, historical trends, filings, and peer comparisons. A minor gap is the lack of tools for real-time stock prices or market news, but agents can work around this with the provided tools for comprehensive financial analysis.

Available Tools

6 tools
stock_compare (Grade B)

Compare financial data of two US public companies side by side. Shows revenue, net income, margins, assets for both.

Parameters (JSON Schema)
- company_a (required): First company name or ticker
- company_b (required): Second company name or ticker
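For concreteness, a stock_compare invocation can be sketched as a JSON-RPC 2.0 tools/call request, the envelope MCP servers accept over Streamable HTTP. The helper name and the ticker values below are illustrative, not part of the server:

```python
import json


def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })


# Compare two companies by ticker (example values).
payload = build_tool_call("stock_compare", {
    "company_a": "AAPL",
    "company_b": "MSFT",
})
print(payload)
```

The same helper works for any of the six tools; only the `name` and `arguments` change.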
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states what data is shown but doesn't describe important behavioral aspects: whether this is a read-only operation, what time period the data covers (current, historical, trailing twelve months), how recent the data is, whether there are rate limits, authentication requirements, or what format the comparison output takes. For a financial data tool with zero annotation coverage, this leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that efficiently convey the core functionality. The first sentence establishes the main purpose, and the second specifies the metrics included. No wasted words or redundant information, though it could be slightly more structured with clearer separation of scope and output details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (comparison operation with financial metrics), no annotations, and no output schema, the description provides basic completeness but has significant gaps. It covers what the tool does and what data it shows, but lacks crucial context about data recency, time periods, output format, and behavioral constraints. For a financial comparison tool with no structured metadata, the description should do more to compensate for these missing elements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters clearly documented in the schema. The description adds marginal value by specifying these are 'US public companies' and implying they should be comparable entities, but doesn't provide additional semantic context beyond what the schema already states (company name or ticker). The baseline of 3 is appropriate when the schema does the heavy lifting for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare financial data of two US public companies side by side' with specific metrics listed (revenue, net income, margins, assets). It distinguishes from siblings by focusing on comparison rather than filings, financials, history, insiders, or sector peers. However, it doesn't explicitly differentiate from potential overlapping functionality like stock_financials which might provide similar data for single companies.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (comparing two US public companies) but doesn't provide explicit guidance on when to use this tool versus alternatives. No mention of when NOT to use it, prerequisites, or specific scenarios where this comparison is most valuable compared to examining individual company data through stock_financials or sector analysis through stock_sector_peers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stock_filings (Grade A)

Get recent SEC filings (10-K, 10-Q, 8-K, S-1) for a US public company. Returns filing type, date, and document links.

Parameters (JSON Schema)
- query (required): Company name or ticker (e.g. 'Apple', 'AAPL')
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions what information is returned (filing type, date, document links), it doesn't cover important behavioral aspects like rate limits, authentication requirements, time range limitations ('recent' is undefined), or pagination behavior. The description provides basic output information but misses key operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and scope, the second specifies the return values. Every word serves a purpose with zero wasted text, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema, the description provides adequate basic information about purpose and returns. However, it lacks details about behavioral constraints (rate limits, authentication), time range definitions, and doesn't fully compensate for the absence of annotations. The description is minimally viable but has clear gaps in operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'query' well-documented as accepting company name or ticker. The description doesn't add any parameter information beyond what's in the schema, but since schema coverage is complete, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get recent SEC filings'), specifies the resource ('for a US public company'), and lists the specific filing types (10-K, 10-Q, 8-K, S-1). It distinguishes itself from siblings like stock_financials or stock_history by focusing specifically on regulatory filings rather than financial data or price history.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('for a US public company') but doesn't explicitly state when to use this tool versus alternatives like stock_financials or stock_insiders. It provides basic scope but lacks explicit guidance on when this tool is preferred over other stock-related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stock_financials (Grade B)

Get financial data for a US public company from SEC EDGAR. Returns revenue, net income, assets, liabilities, equity, net margin, stock tickers, exchange listings, SIC industry code. Works with company names or ticker symbols.

Parameters (JSON Schema)
- query (required): Company name or ticker (e.g. 'Apple', 'AAPL', 'Tesla', 'MSFT')
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the data source (SEC EDGAR) and input flexibility (names or tickers), but lacks critical behavioral details: whether this requires authentication, rate limits, what happens with ambiguous queries, whether data is real-time or historical, or error handling. For a tool with no annotation coverage, this leaves significant gaps in understanding its operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and return values, the second adds input flexibility. Every element serves a purpose, though it could be slightly more front-loaded by leading with the core function. There's no wasted verbiage, making it appropriately concise for a single-parameter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (financial data retrieval), lack of annotations, and no output schema, the description is partially complete. It covers the core function and input but omits behavioral context (e.g., data freshness, error cases) and output details (e.g., format, time periods). For a tool with no structured output documentation, more guidance on return values would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'query' well-documented in the schema as accepting company names or tickers. The description adds marginal value by reiterating this ('Works with company names or ticker symbols') and providing examples ('Apple', 'AAPL'), but doesn't explain semantics beyond what the schema already states. With high schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get financial data for a US public company from SEC EDGAR' with specific financial metrics listed (revenue, net income, etc.). It distinguishes itself from siblings like stock_history (historical prices) and stock_filings (SEC filings) by focusing on financial statement data. However, it doesn't explicitly contrast with stock_compare or stock_sector_peers, which might also involve financial data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'Works with company names or ticker symbols' and mentioning the data source (SEC EDGAR). However, it doesn't provide explicit guidance on when to use this tool versus alternatives like stock_compare (which might compare financials) or stock_sector_peers (which might provide industry context). The agent must infer appropriate usage from the tool's name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stock_history (Grade A)

Get multi-year financial history and trends for a US public company. Revenue, net income, and assets with CAGR calculation.

Parameters (JSON Schema)
- query (required): Company name or ticker
- years (optional, default 5): Number of years
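The CAGR mentioned in the description is the standard compound annual growth rate over the requested window. A minimal sketch of the formula, with illustrative revenue figures (the server's exact calculation is not documented here):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    if start_value <= 0 or years <= 0:
        raise ValueError("start_value and years must be positive")
    return (end_value / start_value) ** (1 / years) - 1


# Example: revenue doubles from 100 to 200 over the tool's default 5-year window.
growth = cagr(100.0, 200.0, 5)
print(f"{growth:.2%}")  # prints 14.87%
```

This is why the `years` parameter matters: the same start and end values yield a different annualized rate depending on the window length.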
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions what data is retrieved (revenue, net income, assets, CAGR) but lacks details on permissions, rate limits, data sources, or response format. For a tool with no annotation coverage, this is a significant gap in transparency about how the tool behaves beyond its basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and concise, using a single sentence that efficiently communicates the tool's purpose, scope, and key outputs without unnecessary words. Every part of the sentence earns its place by specifying the action, target, and metrics.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (financial data retrieval with calculations), lack of annotations, and no output schema, the description is minimally adequate but incomplete. It covers what the tool does but misses behavioral aspects like data freshness, error handling, or output structure. With no annotations or output schema, more context would be helpful for the agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with clear descriptions for both parameters ('query' as company name or ticker, 'years' with default, min, max). The description adds no additional parameter semantics beyond what the schema provides, such as format examples for 'query' or details on how 'years' affects CAGR calculation. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get multi-year financial history and trends') and resources ('for a US public company'), including key metrics like revenue, net income, assets, and CAGR calculation. It distinguishes itself from siblings like stock_compare (comparisons), stock_filings (regulatory documents), stock_financials (general financials), stock_insiders (insider trading), and stock_sector_peers (sector analysis) by focusing on historical trends and CAGR.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for historical financial analysis of US public companies, but it does not explicitly state when to use this tool versus alternatives like stock_financials (which might provide current financials) or stock_compare (for comparisons). No exclusions or prerequisites are mentioned, leaving some ambiguity for the agent in selecting among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stock_insiders (Grade A)

Get recent insider transactions (Form 4 filings) for a US public company. Shows who bought/sold stock and when.

Parameters (JSON Schema)
- query (required): Company name or ticker (e.g. 'Apple', 'AAPL')
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions what data is returned ('who bought/sold stock and when') but lacks critical behavioral details: whether this requires authentication, rate limits, pagination, time range defaults, data freshness, or error conditions. The description provides basic output semantics but misses operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states purpose and scope, the second clarifies output content. Every word earns its place, and information is front-loaded appropriately for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read-only tool with no output schema, the description covers purpose and basic output but lacks behavioral transparency (no annotations) and usage guidelines relative to siblings. It's minimally adequate but has clear gaps in operational context and sibling differentiation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter 'query' with examples. The description doesn't add any parameter-specific details beyond what's in the schema, such as formatting constraints or search behavior. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get recent insider transactions'), resource ('for a US public company'), and scope ('Form 4 filings'). It distinguishes itself from siblings by focusing on insider transactions rather than comparisons, filings, financials, history, or sector peers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('for a US public company') but doesn't explicitly state when to use this tool versus alternatives like 'stock_filings' or 'stock_financials'. No guidance is provided about prerequisites, exclusions, or specific scenarios where this tool is preferred over siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stock_sector_peers (Grade B)

Find companies in the same industry sector (SIC code) as a given company. Useful for competitive analysis.

Parameters (JSON Schema)
- query (required): Company name or ticker
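Peer lookup by SIC code amounts to grouping companies on a shared four-digit industry classifier. A hypothetical sketch, assuming a simple ticker-to-SIC mapping (the sample companies, codes, and helper are illustrative only, not the server's actual logic):

```python
def sector_peers(query: str, companies: dict[str, str]) -> list[str]:
    """Return tickers sharing the query ticker's SIC code, excluding itself.

    `companies` maps ticker -> four-digit SIC code.
    """
    sic = companies[query]
    return sorted(t for t, code in companies.items() if code == sic and t != query)


# Illustrative sample: 3571 = electronic computers, 7372 = prepackaged software.
sample = {"AAPL": "3571", "HPQ": "3571", "DELL": "3571", "MSFT": "7372"}
print(sector_peers("AAPL", sample))  # prints ['DELL', 'HPQ']
```

Because matching is on the classifier alone, a company whose SIC code is unusual for its business may return peers that are not true competitors.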
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool finds peers based on SIC codes, which implies a read-only operation, but doesn't disclose other behavioral traits such as rate limits, authentication needs, data freshness, or what happens with invalid inputs. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured with two sentences: the first states the purpose, and the second provides usage context. Every sentence earns its place by adding value without redundancy. It's front-loaded with the core functionality, making it easy to grasp quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (one parameter, no output schema, no annotations), the description is somewhat complete but has gaps. It explains what the tool does and its use case, but without annotations or output schema, it lacks details on behavioral aspects and return values. This makes it adequate but not fully comprehensive for an agent to use confidently.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'query' parameter documented as 'Company name or ticker.' The description adds no additional meaning beyond this, as it doesn't elaborate on parameter usage or constraints. According to the rules, with high schema coverage (>80%), the baseline score is 3, which is appropriate here since the schema already provides adequate parameter information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find companies in the same industry sector (SIC code) as a given company.' It specifies the verb ('Find'), resource ('companies'), and mechanism ('same industry sector (SIC code)'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'stock_compare' or 'stock_financials', which might also involve company analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance with 'Useful for competitive analysis,' suggesting a context for when to use this tool. However, it doesn't explicitly state when to use this tool versus alternatives (e.g., 'stock_compare' for direct comparisons or 'stock_financials' for financial data) or any exclusions. The guidance is helpful but lacks specificity about tool selection among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

