Server Details

Labor market intelligence: WARN layoffs, H-1B visas, SEC filings, bankruptcies. Free tier.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 11 of 11 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes targeting specific datasets or aggregation levels (e.g., get_company_layoffs vs. get_state_summary vs. get_market_pulse). However, get_state_summary and get_state_intelligence overlap significantly in purpose—both provide state-level data, though the latter includes more datasets. This minor ambiguity could cause misselection between these two tools.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case (e.g., export_records, get_company_layoffs, search_layoffs). The verbs are clear and appropriate (get, export, search, authenticate), and there are no deviations in naming style across the set.

Tool Count: 5/5

With 11 tools, the count is well-scoped for a WARN Firehose API server covering layoff data, market intelligence, and risk analysis. Each tool serves a specific function, from authentication to detailed queries and exports, without feeling excessive or insufficient for the domain.

Completeness: 4/5

The toolset provides comprehensive coverage for querying WARN Act layoffs, state and company analyses, risk signals, and talent pipelines, with clear CRUD-like operations (get, search, export). A minor gap is the lack of a tool for updating or deleting data, but this is reasonable given the server's read-only nature focused on data retrieval and analysis.

Available Tools

11 tools
authenticate (Grade: B)

Authenticate with your WARN Firehose API key. Returns your tier and access level.

Args:
    api_key: Your WARN Firehose API key (starts with wf_)
Parameters (JSON Schema)
    api_key: required
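The only documented format constraint is the `wf_` prefix noted in the Args, so a client can cheaply pre-validate a key before spending a call. A minimal sketch; the prefix rule comes from the description above, and both function names are hypothetical:

```python
def looks_like_warn_firehose_key(api_key: str) -> bool:
    # Per the tool description, valid keys start with "wf_".
    return isinstance(api_key, str) and api_key.startswith("wf_")

def build_authenticate_args(api_key: str) -> dict:
    # Arguments payload an MCP client would pass to the `authenticate` tool.
    if not looks_like_warn_firehose_key(api_key):
        raise ValueError("API key must start with 'wf_'")
    return {"api_key": api_key}
```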
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns tier and access level, which is useful, but lacks details on authentication requirements (e.g., if this is a one-time setup), error handling, or rate limits. This is a significant gap for an authentication tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the main purpose stated first and parameter details in a clear 'Args:' section. It avoids unnecessary fluff, though it could be slightly more structured for optimal clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (authentication with one parameter) and lack of annotations or output schema, the description is minimally adequate. It covers the purpose and parameter format but misses behavioral aspects like response structure or error cases, leaving gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful semantics beyond the input schema: it specifies that the api_key 'starts with wf_', which is not covered in the schema (0% coverage). This compensates well for the low schema coverage, providing crucial format details for the single parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Authenticate with your WARN Firehose API key' and 'Returns your tier and access level.' It specifies the verb (authenticate) and resource (WARN Firehose API), though it doesn't explicitly differentiate from siblings, which are all data retrieval tools, making this distinct by nature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions authentication but doesn't specify prerequisites for other tools or contextual triggers, leaving usage implicit based on the tool's name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

export_records (Grade: A)

Export WARN records in bulk. Requires Pro tier or higher.

Returns up to 500 records with full field details. For CSV/Parquet
downloads, use the REST API at /api/export/.
Get your API key at warnfirehose.com/account

Args:
    api_key: Your WARN Firehose API key (Pro tier required)
    state: Optional 2-letter state code filter
    company: Optional company name filter (partial match)
    days: Look back this many days (default 90, max 730)
    limit: Max records to return (default 100, max 500)
Parameters (JSON Schema)
    days: optional
    limit: optional
    state: optional
    api_key: required
    company: optional
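The defaults and caps stated in the Args (days default 90, max 730; limit default 100, max 500; 2-letter state code) can be enforced client-side before the call. A sketch under those documented constraints; the function name is hypothetical:

```python
def build_export_args(api_key, state=None, company=None, days=90, limit=100):
    # Defaults and caps from the export_records description:
    # days defaults to 90 (max 730), limit defaults to 100 (max 500).
    if not 1 <= days <= 730:
        raise ValueError("days must be between 1 and 730")
    if not 1 <= limit <= 500:
        raise ValueError("limit must be between 1 and 500")
    args = {"api_key": api_key, "days": days, "limit": limit}
    if state is not None:
        if len(state) != 2:
            raise ValueError("state must be a 2-letter code")
        args["state"] = state.upper()
    if company is not None:
        args["company"] = company  # partial match is applied server-side
    return args
```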
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: Pro tier requirement, 500-record limit, default values for days and limit parameters, and the fact that it returns full field details. It also mentions the alternative REST API path for CSV/Parquet downloads, which is useful context. The main gap is lack of information about response format, pagination, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with purpose first, then requirements, then return details, then alternatives, and finally clear parameter documentation. Every sentence earns its place: the API key reminder, the alternative REST API mention, and the detailed parameter explanations all serve distinct purposes. The formatting with bullet-like parameter explanations is clean and scannable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 5-parameter tool with no annotations and no output schema, the description does an excellent job covering most essential context. It explains the tool's purpose, requirements, limitations, parameters, and alternatives. The main gap is the lack of information about the return format (structure of the 500 records with full field details) and any error conditions, which would be helpful given the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed semantic information for all 5 parameters. It explains what each parameter does (state filter, company partial match, days lookback, limit max records), provides default values, maximum values, and clarifies that api_key is required with Pro tier. This adds substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool exports WARN records in bulk with specific scope (full field details, up to 500 records). It distinguishes from sibling tools like search_layoffs or get_recent_layoffs by emphasizing bulk export functionality and explicitly mentioning CSV/Parquet downloads require a different REST API endpoint.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool (bulk export of WARN records) and explicitly states when NOT to use it (for CSV/Parquet downloads, use the REST API instead). However, it doesn't provide guidance on when to choose this tool versus sibling tools like get_recent_layoffs or search_layoffs for similar data access needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_company_layoffs (Grade: A)

Get all WARN Act layoff notices for a specific company.

Args:
    company: Company name to search for (partial match supported)
    api_key: Optional API key for higher rate limits
Parameters (JSON Schema)
    api_key: optional
    company: required

Output Schema (JSON Schema)
    result: required
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'partial match supported' for the company parameter, which is useful behavioral context. However, it does not disclose other important traits such as rate limits (beyond the optional API key hint), pagination, error handling, or what the output contains, leaving significant gaps for a tool that likely returns data lists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a brief, structured 'Args' section that efficiently explains parameters. Every sentence adds value without redundancy, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, the description doesn't need to explain return values. However, for a tool with 2 parameters, 0% schema coverage, and no annotations, the description provides adequate purpose and parameter semantics but lacks behavioral details like rate limits or error handling, making it minimally complete but with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for both parameters: it explains that 'company' is for searching with partial match support, and 'api_key' is optional for higher rate limits. Since schema description coverage is 0%, this compensates well by providing semantics beyond the bare schema, though it doesn't detail formats or constraints for the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get all WARN Act layoff notices') and resource ('for a specific company'), distinguishing it from sibling tools like 'get_recent_layoffs' or 'search_layoffs' which likely have different scopes or filtering approaches. The mention of 'WARN Act' adds domain-specific precision.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying 'for a specific company' and mentioning partial match support, which suggests when to use this tool. However, it lacks explicit guidance on when to choose this over alternatives like 'search_layoffs' or 'get_recent_layoffs', and does not state any exclusions or prerequisites beyond the optional API key.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_pulse (Grade: B)

Get a single-call market snapshot across all 6 datasets.

Returns: WARN stats (30d trend), top industries, at-risk companies,
LCA/H-1B counts, DOL claims, SEC filings, bankruptcies, JOLTS snapshot.

Args:
    api_key: Optional API key for higher rate limits
Parameters (JSON Schema)
    api_key: optional

Output Schema (JSON Schema)
    result: required
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool returns a snapshot with specific data elements, but lacks details on rate limits (only hints at 'higher rate limits' with api_key), error handling, authentication needs beyond the optional api_key, or whether it's read-only or has side effects. This is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured: it starts with the core purpose, lists return values in a bullet-like format, and ends with parameter details. Every sentence adds value without redundancy, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (aggregating 6 datasets) and the presence of an output schema (which likely details return values), the description is reasonably complete. It outlines the scope and data elements returned, and the parameter is adequately explained. However, without annotations, it could benefit from more behavioral context like rate limits or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for the single parameter 'api_key', explaining it's 'Optional API key for higher rate limits.' This clarifies its purpose beyond what the schema provides (which only indicates it's an optional string with no description). With 0% schema description coverage and only one parameter, this compensation is effective, though it could elaborate on format or sourcing.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get a single-call market snapshot across all 6 datasets.' This specifies the verb ('Get') and resource ('market snapshot'), and the list of returned data elements further clarifies scope. However, it doesn't explicitly differentiate from sibling tools like 'get_stats' or 'get_state_summary', which might offer overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'single-call market snapshot' but doesn't compare it to other tools like 'get_stats' or 'get_state_summary' that might provide similar or partial data. There's no mention of prerequisites, timing, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_recent_layoffs (Grade: A)

Get the most recent WARN Act layoff notices.

Args:
    days: Look back this many days (default 30)
    state: Optional state filter (2-letter code)
    limit: Max results (default 25, max 100)
    api_key: Optional API key for higher rate limits
Parameters (JSON Schema)
    days: optional
    limit: optional
    state: optional
    api_key: optional

Output Schema (JSON Schema)
    result: required
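Over Streamable HTTP, a tool like this is invoked with the standard MCP tools/call request, a JSON-RPC 2.0 envelope carrying the tool name and its arguments. A minimal sketch of building that payload; the helper name and the example argument values are illustrative:

```python
import json

def mcp_tools_call(name: str, arguments: dict, request_id: int = 1) -> str:
    # JSON-RPC 2.0 envelope used by MCP's tools/call method.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

req = mcp_tools_call("get_recent_layoffs", {"days": 30, "state": "NY", "limit": 25})
```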
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'api_key' for higher rate limits, which hints at rate-limiting behavior, but lacks details on permissions, error handling, or response format. The description is minimal beyond parameter listing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the tool's purpose, followed by a structured parameter list. It avoids unnecessary details, but the parameter explanations are brief and could be more integrated into the flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and an output schema (which handles return values), the description covers the basic purpose and parameters. However, it lacks behavioral context like authentication needs or error scenarios, making it adequate but with gaps for a data retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining each parameter's purpose (e.g., 'days' for look-back period, 'state' as a filter, 'limit' for max results, 'api_key' for rate limits), though it doesn't specify format details like state code validation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and resource ('most recent WARN Act layoff notices'), distinguishing it from siblings like 'get_company_layoffs' or 'search_layoffs' by focusing on recent notices rather than company-specific or search-based queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving recent layoff data but does not explicitly state when to use this tool versus alternatives like 'get_company_layoffs' or 'search_layoffs'. No guidance on exclusions or prerequisites is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_risk_signal (Grade: A)

Get companies ranked by composite distress signal across all datasets.

Combines WARN layoff volume/recency, SEC restructuring filings,
bankruptcy filings, and H-1B denial rates into a single risk score.
Levels: Critical (7+), Elevated (4-6), Moderate (2-3), Low (1).
Requires Starter tier or higher. Get your API key at warnfirehose.com/account

Args:
    state: Optional 2-letter state code to filter
    min_score: Minimum risk score (default 3)
    limit: Max results (default 15, max 50)
    api_key: Your WARN Firehose API key (Starter tier required)
Parameters (JSON Schema)
    limit: optional
    state: optional
    api_key: optional
    min_score: optional

Output Schema (JSON Schema)
    result: required
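The four level thresholds quoted in the description (Critical 7+, Elevated 4-6, Moderate 2-3, Low 1) map directly to a bucketing function. A sketch assuming integer scores; the description leaves scores of 0 unstated, so anything below 2 is treated as Low here:

```python
def risk_level(score: int) -> str:
    # Thresholds from the get_risk_signal description:
    # Critical (7+), Elevated (4-6), Moderate (2-3), Low (1).
    if score >= 7:
        return "Critical"
    if score >= 4:
        return "Elevated"
    if score >= 2:
        return "Moderate"
    return "Low"
```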
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a read operation (implied by 'get'), requires Starter tier or higher and an API key, includes rate limits (max 50 results), and explains the risk score levels. It doesn't detail error handling or pagination, but covers essential operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It front-loads the core purpose, then explains the composite scoring, levels, requirements, and parameters in clear sections. Every sentence adds value, though the API key instructions could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with 0% schema coverage and no annotations, the description does an excellent job explaining inputs, authentication, and scoring logic. Since an output schema exists, it doesn't need to detail return values. It could briefly mention the output format or error cases, but overall it's highly complete for this complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must fully compensate. It does so by explaining all 4 parameters: state filters by 2-letter code, min_score sets a threshold with default 3, limit controls results with default 15 and max 50, and api_key specifies authentication requirements. This adds crucial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get companies ranked by composite distress signal across all datasets.' It specifies the verb ('get'), resource ('companies'), and scope ('ranked by composite distress signal'), distinguishing it from siblings like get_company_layoffs or get_recent_layoffs by focusing on a multi-source risk score rather than raw layoff data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: it combines multiple datasets into a risk score, which implies it's for aggregated risk assessment rather than raw data retrieval. However, it doesn't explicitly state when not to use it or name specific alternatives among siblings, though the distinction is implied by the composite nature.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_state_intelligence (Grade: A)

Get a unified state profile combining all 6 datasets for a US state.

Returns WARN notices, LCA petitions, H-1B approvals/denials, DOL claims,
bankruptcy matches, JOLTS data, and a composite distress score.
Requires Pro tier or higher. Get your API key at warnfirehose.com/account

Args:
    state_code: Two-letter state abbreviation (e.g. CA, TX, NY)
    api_key: Your WARN Firehose API key (Pro tier required)
Parameters (JSON Schema)
    api_key: optional
    state_code: required

Output Schema (JSON Schema)
    result: required
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and discloses key behavioral traits: it returns specific datasets (WARN notices, LCA petitions, etc.), requires authentication ('Requires Pro tier or higher'), and provides a composite distress score, though it lacks details on rate limits or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose, followed by return details and prerequisites. The API key instruction could be better integrated, but most sentences earn their place with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (combining 6 datasets) and the lack of annotations, the description is fairly complete, detailing inputs, outputs, and auth requirements. Since an output schema exists, return values need not be explained; it could still improve by mentioning data freshness or limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description compensates fully by explaining both parameters: 'state_code' as a 'Two-letter state abbreviation' with examples, and 'api_key' as 'Your WARN Firehose API key' with tier requirements, adding clear meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get a unified state profile') and resource ('combining all 6 datasets for a US state'), distinguishing it from siblings like get_state_summary or get_stats by specifying the comprehensive nature of the data returned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states when to use this tool ('Get a unified state profile combining all 6 datasets') and mentions prerequisites ('Requires Pro tier or higher'), but does not specify when not to use it or name alternatives among siblings for different data needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_state_summary (Grade: A)

Get a summary of WARN Act layoff data for a specific US state.

Args:
    state: Two-letter state code (e.g. CA, TX, NY, FL)
    api_key: Optional API key for higher rate limits
Parameters (JSON Schema)
    state: required
    api_key: optional

Output Schema (JSON Schema)
    result: required
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions that the api_key is 'Optional API key for higher rate limits,' which adds useful behavioral context about authentication and rate limiting. However, it doesn't disclose other traits like response format, error handling, or whether the operation is read-only (implied by 'Get' but not explicit).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the purpose clearly, followed by a concise 'Args' section that lists parameters with brief explanations. Every sentence earns its place without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, 1 required) and the presence of an output schema (which handles return values), the description is fairly complete. It covers purpose, parameters, and some behavioral context. However, without annotations, it could benefit from more details on usage scenarios or limitations to fully compensate for the lack of structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining that 'state' is a 'Two-letter state code (e.g. CA, TX, NY, FL)' and 'api_key' is 'Optional API key for higher rate limits,' providing clarity beyond the schema's basic types. This covers both parameters adequately, though it doesn't detail format constraints beyond examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb ('Get a summary') and resource ('WARN Act layoff data for a specific US state'), distinguishing it from siblings like get_company_layoffs or get_recent_layoffs by focusing on state-level summaries rather than company-specific or recent data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'for a specific US state,' which helps differentiate it from tools like get_stats or get_market_pulse that might provide broader data. However, it lacks explicit guidance on when to use this tool versus alternatives like get_state_intelligence or search_layoffs, which could offer overlapping or complementary functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_stats (C)

Get overall statistics about the WARN Firehose database.

Parameters (JSON Schema)
- api_key (optional)

Output Schema
- result (required)
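With only one optional parameter, a call sketch for this tool is short. The sketch below assumes the same JSON-RPC `tools/call` framing; since the description does not document what `api_key` changes for this tool, the helper (hypothetical) simply includes it only when supplied:

```python
def build_stats_call(api_key=None, request_id=1):
    """Build a `tools/call` request for get_stats.

    api_key is optional per the schema; its effect (e.g. rate limits)
    is not documented for this tool, so it is passed through untouched.
    """
    arguments = {} if api_key is None else {"api_key": api_key}
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "get_stats", "arguments": arguments},
    }

print(build_stats_call())
print(build_stats_call(api_key="wf_example"))  # hypothetical key format
```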
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action is to 'Get' statistics, implying a read-only operation, but doesn't specify authentication requirements (e.g., whether api_key is optional or mandatory), rate limits, error handling, or what 'overall statistics' entails. This lack of detail makes it inadequate for safe and effective use.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words. It's front-loaded with the core purpose and efficiently conveys the tool's function. Every part of the sentence earns its place, making it highly concise and well-structured.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 optional parameter) and the presence of an output schema (which should detail return values), the description is minimally complete. However, it lacks context on authentication, usage scenarios, and parameter meaning, which are gaps even with the output schema. This results in an average score, as it meets basic needs but leaves important aspects uncovered.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter (api_key) with 0% description coverage, and the tool description adds no information about parameters. It doesn't explain what api_key is for, when it's needed, or how it affects the operation. Since schema coverage is low and the description fails to compensate, the score is low.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'overall statistics about the WARN Firehose database', making the purpose specific and understandable. It distinguishes from siblings like get_company_layoffs or get_recent_layoffs by focusing on aggregated statistics rather than specific data queries. However, it doesn't explicitly contrast with all siblings, keeping it at 4 rather than 5.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., authentication needs), differentiate from similar tools like get_state_summary or get_state_intelligence, or specify use cases. This leaves the agent without context for tool selection, resulting in a minimal score.

get_talent_pipeline (A)

Find available talent from recent layoffs, cross-referenced with LCA visa roles.

Shows what occupations/skills each laid-off company was hiring for.
Useful for recruiters targeting skilled workers from recently laid-off companies.
Requires Starter tier or higher. Get your API key at warnfirehose.com/account

Args:
    state: Optional 2-letter state code
    days: Look back this many days (default 90)
    limit: Max results (default 15, max 50)
    api_key: Your WARN Firehose API key (Starter tier required)
Parameters (JSON Schema)
- days (optional)
- limit (optional)
- state (optional)
- api_key (optional)

Output Schema
- result (required)
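The documented defaults and constraints (days defaults to 90, limit caps at 50, Starter-tier key required) can be applied client-side before sending the call. A sketch, again assuming JSON-RPC `tools/call` framing; note the schema itself marks api_key optional, so enforcing the tier requirement in the client, as this hypothetical helper does, is a design choice rather than documented behavior:

```python
def build_talent_pipeline_call(state=None, days=90, limit=15, api_key=None):
    """Build a `tools/call` request for get_talent_pipeline.

    Applies the documented constraints: days defaults to 90, limit
    defaults to 15 with a stated max of 50, and the description says a
    Starter-tier API key is required even though the schema marks it optional.
    """
    if api_key is None:
        raise ValueError("get_talent_pipeline requires a Starter-tier API key")
    arguments = {
        "days": days,
        "limit": min(limit, 50),  # cap at the documented max of 50
        "api_key": api_key,
    }
    if state is not None:
        arguments["state"] = state.upper()  # optional 2-letter state code
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "get_talent_pipeline", "arguments": arguments},
    }
```

Clamping `limit` locally avoids a round-trip rejection if the server enforces the max strictly; whether it rejects or silently truncates is not documented.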
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses behavioral traits: it's a read operation (implied by 'find' and 'shows'), requires an API key and Starter tier, and has default values for parameters. However, it lacks details on rate limits, error handling, pagination, or the structure of returned data, which are important for a tool with an output schema but no annotation coverage.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first two sentences state the purpose, followed by usage context and prerequisites, then a clear parameter section. Every sentence adds value, though the API key instruction could be slightly more concise. Overall, it's well-structured with minimal waste.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 parameters, no annotations, but with an output schema), the description is mostly complete. It covers purpose, usage, prerequisites, and parameter semantics. Since an output schema exists, it needn't explain return values. However, it could improve by mentioning behavioral aspects like rate limits or error cases to fully compensate for the lack of annotations.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds significant meaning beyond the input schema: it explains that 'state' is an 'Optional 2-letter state code', 'days' is 'Look back this many days (default 90)', 'limit' is 'Max results (default 15, max 50)', and 'api_key' is 'Your WARN Firehose API key (Starter tier required)'. This clarifies the purpose, constraints, and requirements for all parameters effectively.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find available talent from recent layoffs, cross-referenced with LCA visa roles. Shows what occupations/skills each laid-off company was hiring for.' This specifies the verb ('find'), resource ('talent'), and scope ('recent layoffs, cross-referenced with LCA visa roles'), distinguishing it from sibling tools like get_recent_layoffs or get_company_layoffs which likely focus on layoff data alone.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage: 'Useful for recruiters targeting skilled workers from recently laid-off companies.' It also mentions prerequisites: 'Requires Starter tier or higher. Get your API key at warnfirehose.com/account.' However, it does not explicitly state when to use this tool versus alternatives like search_layoffs or get_state_intelligence, which could help differentiate further.

search_layoffs (A)

Search WARN Act layoff notices by company name, city, or keyword.

Args:
    query: Search term (company name, city, etc.)
    state: Optional 2-letter state code to filter (e.g. CA, TX, NY)
    limit: Max results to return (default 20, max 100)
    api_key: Optional API key for higher rate limits
Parameters (JSON Schema)
- limit (optional)
- query (required)
- state (optional)
- api_key (optional)

Output Schema
- result (required)
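For the search tool, only `query` is required; the other three parameters are filters or modifiers. A sketch of assembling the call under the same JSON-RPC `tools/call` assumption, with the documented cap of 100 results applied locally (the helper name is hypothetical):

```python
def build_search_call(query, state=None, limit=20, api_key=None):
    """Build a `tools/call` request for search_layoffs.

    query is required (company name, city, or keyword); limit defaults
    to 20 with a documented max of 100; state and api_key are optional.
    """
    if not query or not query.strip():
        raise ValueError("query is required (company name, city, or keyword)")
    arguments = {"query": query.strip(), "limit": min(limit, 100)}
    if state is not None:
        arguments["state"] = state.upper()  # optional 2-letter filter, e.g. CA
    if api_key is not None:
        arguments["api_key"] = api_key  # optional: higher rate limits
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "search_layoffs", "arguments": arguments},
    }
```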
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses some behavioral traits like the default and max for 'limit' and the optional 'api_key' for higher rate limits, but does not cover other aspects such as authentication requirements beyond the API key, rate limits without the key, pagination, or error handling. It adds value but is incomplete for a search tool with no annotation coverage.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the purpose clearly, followed by a structured 'Args' section that efficiently details each parameter without waste. Every sentence earns its place by providing essential information.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a search tool with 4 parameters, no annotations, and an output schema exists), the description is mostly complete. It covers purpose, parameters, and some behavioral context, but lacks details on authentication, rate limits, or error handling. The presence of an output schema means return values need not be explained, but more behavioral transparency would enhance completeness.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds significant meaning beyond the input schema by explaining each parameter's purpose: 'query' as a search term for company name, city, etc.; 'state' as an optional 2-letter code filter; 'limit' with default and max values; and 'api_key' for higher rate limits. This fully documents the parameters where the schema lacks descriptions.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search WARN Act layoff notices') and the resource ('by company name, city, or keyword'), distinguishing it from siblings like 'get_recent_layoffs' or 'get_state_summary' which likely have different scopes or purposes. The verb 'search' is precise and the domain is well-defined.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Search WARN Act layoff notices by company name, city, or keyword'), but does not explicitly state when not to use it or name alternatives among the sibling tools. It implies usage for searching rather than retrieving specific or aggregated data, but lacks explicit exclusions.
