
Server Details

SEC EDGAR filings parsed: 8-K body-text classification, 13D activist tagging, S-3 ATM detection.

Status: Healthy
Transport: Streamable HTTP
Repository: jaablon/filingfirehose-python
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.3/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct purpose: get_filing retrieves a specific filing by accession number, while the three search tools focus on different filing types (13D/13G, 8-K, ATM offerings). No overlap or ambiguity between them.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern: get_filing and search_<type>_filings. The slight difference between 'get' and 'search' is appropriate for the distinct operations (retrieval vs. search).

Tool Count: 5/5

With 4 tools, the server is well-scoped. It covers both retrieval of a specific filing and search across three important SEC filing categories. The count feels neither too thin nor too heavy for the domain.

Completeness: 3/5

The tools cover a niche but useful subset of SEC filings. However, there is no general search (e.g., by company name or date range) and no support for other common filings like 10-K or 10-Q. The 72-hour recency limit also restricts coverage. Gaps exist for broader SEC research.

Available Tools

4 tools
get_filing: A

Fetch one filing by SEC accession number, regardless of recency.

Useful when an agent has an accession number from a citation or earlier
tool call and needs the parsed details.

Args:
    accession_number: SEC accession in 'XXXXXXXXXX-YY-NNNNNN' format.

Returns:
    JSON for the filing if found in our archive, or {"found": false}.
Parameters (JSON Schema):
  accession_number (required)

Output Schema:
  result (required)
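
As a sanity check before calling the tool, the 'XXXXXXXXXX-YY-NNNNNN' accession format from the docstring can be validated client-side. A minimal sketch; the regex is inferred from that pattern and is not part of the server:

```python
import re

# 'XXXXXXXXXX-YY-NNNNNN': 10-digit filer/agent ID, 2-digit year,
# 6-digit sequence number -- shape inferred from the docstring above.
ACCESSION_RE = re.compile(r"^\d{10}-\d{2}-\d{6}$")

def is_valid_accession(accession_number: str) -> bool:
    """Return True if the string matches the SEC accession-number shape."""
    return bool(ACCESSION_RE.fullmatch(accession_number))

print(is_valid_accession("0001234567-24-000123"))  # True
print(is_valid_accession("1234-24-1"))             # False
```

Validating up front avoids a round trip that would only return {"found": false} for a malformed number.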
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the return format (JSON with filing data or {"found": false}) and mentions it fetches regardless of recency. No annotations are provided, but the description covers the main behavior well on its own.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with a front-loaded intro, a usage hint, and structured Args/Returns sections. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one required parameter, clear output) and sibling search tools, the description is complete. It explains input, output, and when to use it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description provides the exact format for the accession_number parameter ('XXXXXXXXXX-YY-NNNNNN'), which is crucial and not in the schema. Since schema coverage is 0%, the description fully compensates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches one filing by its SEC accession number, with verb 'fetch' and resource 'filing'. It distinguishes from sibling search tools by specifying the unique key.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It gives a specific use case when the agent has an accession number from a citation or earlier call. However, it does not explicitly state when not to use it or contrast with sibling tools, though that is implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_13d_filings: A

Search recent SEC Schedule 13D / 13G filings from the past 72 hours.

Args:
    activist: Filter to filings tagged with this activist filer name.
        Substring match, case-insensitive. Examples: 'Saba', 'Starboard',
        'Icahn', 'Elliott', 'Pershing'.
    min_percent: Minimum percent of class disclosed.
    include_amendments: Include 13D/A and 13G/A amendments. Default True.
    include_passive_13g: Include passive Schedule 13G filings. Default False
        (only active 13D / 13D/A returned).
    limit: Max results (1-50, default 25).

Returns:
    JSON list of filings with cusip, percent_of_class, aggregate_amount,
    activist_filers, and an Item 4 purpose excerpt.
Parameters (JSON Schema):
  limit (optional)
  activist (optional)
  min_percent (optional)
  include_amendments (optional)
  include_passive_13g (optional)

Output Schema:
  result (required)
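
The activist filter's documented behavior (substring match, case-insensitive, against the returned activist_filers list) can be sketched client-side. The sample rows and helper name below are hypothetical, assuming the fields listed under Returns:

```python
def matches_activist(filing: dict, activist: str) -> bool:
    """Case-insensitive substring match against activist_filers,
    mirroring how the activist arg is documented to behave."""
    needle = activist.lower()
    return any(needle in name.lower() for name in filing.get("activist_filers", []))

# Hypothetical rows shaped like the documented Returns fields.
filings = [
    {"activist_filers": ["Saba Capital Management"], "percent_of_class": 7.2},
    {"activist_filers": ["Example Passive Fund"], "percent_of_class": 5.1},
]

hits = [f for f in filings if matches_activist(f, "saba")]
print(len(hits))  # 1
```

The same matching rule explains why partial names like 'Icahn' or 'Elliott' work as filter values.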
Behavior: 4/5

With no annotations, the description fully covers behavior: time window (past 72 hours), default inclusion/exclusion of amendments and passive 13G, and the nature of the activist filter (substring, case-insensitive). It also notes the return format with specific fields, which exceeds minimal requirements.

Conciseness: 4/5

The description is moderately sized, front-loads the purpose, and uses clear Args/Returns structure. Each sentence adds value, though the description could be slightly more concise by omitting redundant examples.

Completeness: 4/5

An output schema exists, so return values are partly covered. The description lists key fields returned (cusip, percent_of_class, etc.), adding context. Given 5 parameters and the time constraint, the description is mostly complete, though edge cases like invalid activist input are not addressed.

Parameters: 5/5

The schema has no parameter descriptions (0% coverage), so the description must compensate. It provides precise semantics for each parameter: activist (substring match, examples), min_percent (meaning), include_amendments and include_passive_13g (default behavior and effect), and limit (range). This fully clarifies usage.

Purpose: 4/5

The description clearly states the tool searches recent SEC Schedule 13D/13G filings from the past 72 hours. It uses a specific verb and resource, and the sibling tools (get_filing, search_8k_filings, search_atm_offerings) have different purposes, though no explicit differentiation is provided.

Usage Guidelines: 3/5

The description details parameters and defaults but does not provide guidance on when to use this tool versus alternatives. It implicitly expects the agent to infer from the filing type, but no explicit when/when-not instructions are given.

search_8k_filings: A

Search recent SEC 8-K filings (current report) from the past 72 hours.

Args:
    items: Comma-separated 8-K item codes to filter on. Examples: '1.05'
        (cybersecurity), '5.02' (officer departure), '8.01' (other events),
        '1.01' (material agreement). Leave None for all 8-Ks.
    suspected_buried_only: If True, return only filings where our
        body-text classifier flagged a suspected misclassification —
        i.e. cyber language under Item 8.01 that should have been 1.05,
        officer-departure language under 8.01 that should have been 5.02.
    limit: Max results (1-50, default 25).

Returns:
    JSON-formatted list of filings. Each includes: accession_number,
    company_name, filed_at, filer_reported_items, detected_items,
    discrepancy_items (in body but not reported), and
    suspected_buried_events (map of reported→suspected).
Parameters (JSON Schema):
  items (optional)
  limit (optional)
  suspected_buried_only (optional)

Output Schema:
  result (required)
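
The discrepancy_items field described above (item codes detected in the body text but absent from the filer-reported items) amounts to a set difference. A sketch under the assumption that item codes are plain strings like '8.01'; the function name is illustrative, not the server's:

```python
def discrepancy_items(filer_reported_items: list[str], detected_items: list[str]) -> list[str]:
    """Items the body-text classifier found that the filer did not report
    in the header -- the documented meaning of discrepancy_items."""
    reported = set(filer_reported_items)
    return sorted(item for item in detected_items if item not in reported)

# A filing reporting only Item 8.01 while the body shows cyber (1.05) language:
print(discrepancy_items(["8.01"], ["8.01", "1.05"]))  # ['1.05']
```

A nonempty result is exactly the situation suspected_buried_only=True is meant to surface.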
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It explains the classifier for suspected misclassifications and the return fields, but omits details like rate limits, authentication, or data-source freshness beyond 'past 72 hours'.

Conciseness: 5/5

Well-organized with clear 'Args' and 'Returns' sections. Every sentence adds value, no fluff. Front-loaded with purpose and time scope.

Completeness: 4/5

The Returns section documents the output's key fields. The description covers the main functionality but could mention pagination, the time zone for 'past 72 hours', or error handling. Good overall.

Parameters: 5/5

Schema coverage is 0%, but the description adds rich meaning: example item codes, an explanation of the 'suspected_buried_only' classifier, and the limit range. This fully compensates for the schema's lack of descriptions.

Purpose: 5/5

Clearly states 'Search recent SEC 8-K filings' with a specific time window (past 72 hours), distinguishing it from siblings like 'search_13d_filings' or 'get_filing'.

Usage Guidelines: 3/5

The description implies use for 8-K filings but does not explicitly state when to use it versus alternatives, and it lacks a direct comparison to sibling tools.

search_atm_offerings: A

Search recent SEC at-the-market (ATM) equity offerings from the past 72 hours.

Pulls from S-3 / 424B5 family filings where our parser flagged ATM-offering
indicators in the body language.

Args:
    sales_agent: Filter by sales agent name (substring, case-insensitive).
        Examples: 'Cantor Fitzgerald', 'Jefferies', 'Roth Capital',
        'H.C. Wainwright'.
    min_shelf_million_usd: Minimum shelf size in millions USD.
    limit: Max results (1-50, default 25).

Returns:
    JSON list with shelf_size_usd, is_atm, sales_agents, use_of_proceeds_excerpt.
Parameters (JSON Schema):
  limit (optional)
  sales_agent (optional)
  min_shelf_million_usd (optional)

Output Schema:
  result (required)
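
Note the unit mismatch in the docstring: min_shelf_million_usd is stated in millions while the returned shelf_size_usd is in dollars. A sketch of the implied filtering, with hypothetical sample data and a helper name that is not part of the server:

```python
def filter_atm(filings, sales_agent=None, min_shelf_million_usd=None):
    """Apply the documented filters: substring, case-insensitive agent
    match, plus a shelf-size floor converted from millions to USD."""
    out = []
    for f in filings:
        if sales_agent is not None:
            needle = sales_agent.lower()
            if not any(needle in a.lower() for a in f.get("sales_agents", [])):
                continue
        if min_shelf_million_usd is not None:
            # Convert the millions-denominated floor to raw USD before comparing.
            if f.get("shelf_size_usd", 0) < min_shelf_million_usd * 1_000_000:
                continue
        out.append(f)
    return out

# Hypothetical rows shaped like the documented Returns fields.
filings = [
    {"sales_agents": ["Cantor Fitzgerald & Co."], "shelf_size_usd": 150_000_000},
    {"sales_agents": ["Jefferies LLC"], "shelf_size_usd": 40_000_000},
]
print(len(filter_atm(filings, sales_agent="cantor", min_shelf_million_usd=100)))  # 1
```

Passing min_shelf_million_usd=100 therefore keeps only shelves of $100M or more.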
Behavior: 3/5

No annotations are provided, so the description must convey behavioral traits. It mentions the data source and temporal scope but does not state whether the tool is read-only or idempotent, or whether it has side effects. It is not misleading but incomplete for a search tool.

Conciseness: 5/5

The description is concise, structured into an introduction, Args section with bullet-like formatting, and Returns summary. Every sentence contributes meaningfully with no redundancy.

Completeness: 4/5

The description covers the tool's purpose, parameters, and return format. It lacks details on data update frequency or pagination, but given the presence of an output schema and the limit parameter, it is reasonably complete.

Parameters: 5/5

Schema description coverage is 0%, yet the description fully documents all three parameters with type, constraints, defaults, and examples, adding significant value beyond the schema. The return format is also specified.

Purpose: 5/5

The description clearly states it searches recent SEC ATM equity offerings from the past 72 hours, specifying the filing families and parser method. It is distinct from sibling tools which search other filing types or retrieve filings.

Usage Guidelines: 4/5

The description explains the tool's purpose (searching recent ATM offerings) and implicitly distinguishes from siblings by topic. It lacks explicit guidance on when not to use or alternative tools, but the context is clear.
