WARN Firehose
Server Details
Real-time labor market intelligence from 6 federal datasets. Track mass layoffs, visa petitions, SEC restructuring filings, bankruptcy cases, unemployment claims, and job market turnover across all 50 US states. Includes proprietary risk signal scoring that cross-references WARN + SEC + bankruptcy data.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 11 of 11 tools scored. Lowest: 3.1/5.
Most tools have distinct purposes targeting specific data types or operations, such as export_records for bulk export, get_risk_signal for risk scoring, and get_talent_pipeline for recruitment insights. However, get_recent_layoffs and search_layoffs could overlap in retrieving layoff data, potentially causing confusion if an agent needs to choose between them for recent notices.
All tool names follow a consistent verb_noun pattern using snake_case, such as authenticate, export_records, get_company_layoffs, and get_state_intelligence. This uniformity makes the set predictable and easy to navigate, with no deviations in naming conventions.
With 11 tools, the count is well-scoped for a WARN Firehose server covering authentication, data export, layoff queries, risk analysis, state intelligence, statistics, and talent pipeline insights. Each tool serves a clear function without redundancy, fitting the domain's complexity appropriately.
The tool surface provides comprehensive coverage for WARN Act data and related market intelligence, including CRUD-like operations (e.g., get, search, export) and advanced analytics (e.g., risk signals, talent pipelines). A minor gap exists in update or delete operations, but this is reasonable given the server's focus on read-only data access and analysis.
Available Tools
11 tools

authenticate
Authenticate with your WARN Firehose API key. Returns your tier and access level.
Args:
api_key: Your WARN Firehose API key (starts with wf_)

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
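The description guarantees only that keys carry a `wf_` prefix, which is enough for a cheap client-side sanity check before spending a call on `authenticate`. A minimal sketch; the alphanumeric tail pattern is an assumption, not part of the documented format:

```python
import re

# Only the "wf_" prefix is documented; the [A-Za-z0-9]+ tail is an
# illustrative assumption about what follows it.
KEY_PATTERN = re.compile(r"^wf_[A-Za-z0-9]+$")

def looks_like_warn_firehose_key(api_key: str) -> bool:
    """Shape check before calling the authenticate tool; the server
    remains the authority on whether a key is actually valid."""
    return bool(KEY_PATTERN.match(api_key))
```

A check like this catches obvious mistakes (pasting a key from a different service, truncating the prefix) without a network round trip.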
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that the tool authenticates and returns tier/access level, which covers the core behavior. However, it lacks details on error handling (e.g., invalid keys), rate limits, persistence of authentication, or side effects. The description doesn't contradict annotations (none exist), but it's minimal for a tool that likely governs access to other operations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the purpose and outcome, followed by a brief parameter explanation. It avoids redundancy, though the 'Args:' section is slightly informal. Every sentence adds value, making it efficient for an authentication tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (authentication with one parameter) and lack of annotations/output schema, the description is moderately complete. It covers the basic operation and parameter, but for a tool that likely controls access to sibling data tools, it should mention prerequisites (e.g., must be called first) or error cases. Without output schema, it hints at returns but doesn't detail the response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds significant meaning beyond the input schema, which has 0% coverage. It explains the 'api_key' parameter as 'Your WARN Firehose API key (starts with wf_)', providing format and context not in the schema. With only one parameter and no schema descriptions, this compensates well, though it could note if the key is stored or validated further.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Authenticate with your WARN Firehose API key. Returns your tier and access level.' It specifies the verb ('authenticate'), resource ('WARN Firehose API key'), and outcome. However, it doesn't explicitly differentiate from sibling tools (all of which appear to be data retrieval tools), though authentication is clearly distinct in function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating it authenticates with an API key and returns access information, suggesting it should be used for initial setup or verification. However, it lacks explicit guidance on when to use this tool versus alternatives (e.g., whether it's required before calling other tools, or if it's a one-time operation). The sibling tools are all data queries, so the distinction is intuitive but not stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
export_records
Export WARN records in bulk. Requires Pro tier or higher.
Returns up to 500 records with full field details. For CSV/Parquet
downloads, use the REST API at /api/export/.
Get your API key at warnfirehose.com/account
Args:
api_key: Your WARN Firehose API key (Pro tier required)
state: Optional 2-letter state code filter
company: Optional company name filter (partial match)
days: Look back this many days (default 90, max 730)
limit: Max records to return (default 100, max 500)

| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| limit | No | | |
| state | No | | |
| api_key | Yes | | |
| company | No | | |
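The description pins down defaults and maxima (days: default 90, max 730; limit: default 100, max 500) that the bare schema omits. A sketch of an argument builder that applies those documented bounds client-side; clamping rather than rejecting is an assumption about how to handle out-of-range values, and the server may instead return an error:

```python
def build_export_args(api_key, state=None, company=None, days=90, limit=100):
    """Assemble arguments for export_records, clamping to the documented
    bounds: days default 90 / max 730, limit default 100 / max 500."""
    args = {
        "api_key": api_key,
        "days": max(1, min(days, 730)),
        "limit": max(1, min(limit, 500)),
    }
    if state is not None:
        # Description says "2-letter state code filter"; normalize case.
        args["state"] = state.strip().upper()
    if company is not None:
        args["company"] = company  # partial match per the description
    return args
```

Optional filters are omitted entirely when unset so the server applies its own defaults.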
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses important behavioral traits: Pro tier requirement, returns up to 500 records, includes default/max values for parameters, and mentions alternative download methods. It doesn't cover error handling or rate limits, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with purpose first, then requirements, return details, alternatives, and parameter explanations. It's appropriately sized with no wasted sentences, though the API key acquisition note could be slightly more integrated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters, 0% schema coverage, no annotations, and no output schema, the description does an excellent job covering operational requirements, parameter semantics, and return limits. The main gap is lack of output format details beyond 'full field details', but given the context, it's reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, but the description fully compensates by explaining all 5 parameters in detail. It clarifies that 'state' is a 2-letter code filter, 'company' uses partial match, 'days' has default 90 and max 730, and 'limit' has default 100 and max 500. This adds crucial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Export WARN records in bulk' with 'full field details'. It specifies the resource (WARN records) and action (export in bulk), but doesn't explicitly differentiate from sibling tools like 'search_layoffs' or 'get_recent_layoffs' that might also retrieve records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: 'Requires Pro tier or higher' and mentions an alternative for CSV/Parquet downloads via REST API. However, it doesn't explicitly state when to use this tool versus sibling tools like 'search_layoffs' or 'get_recent_layoffs' for retrieving records, leaving the choice implied rather than clearly defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_company_layoffs
Get all WARN Act layoff notices for a specific company.
Args:
company: Company name to search for (partial match supported)
api_key: Optional API key for higher rate limits

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | | |
| company | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
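"Partial match supported" is ambiguous: it could mean substring, prefix, tokenized, or fuzzy matching, and the description does not say which. A sketch of the most common reading, case-insensitive substring matching, useful for predicting which records a query like "Acme" would return (the actual server-side rule is an assumption here):

```python
def matches_partial(record_company: str, query: str) -> bool:
    """Case-insensitive substring match — one plausible reading of
    'partial match supported'. The server's real matching rule
    (prefix, tokenized, fuzzy) is undocumented."""
    return query.strip().lower() in record_company.lower()
```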
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions 'partial match supported' for the company parameter, which adds useful behavioral context, but lacks details on rate limits (only hints at 'higher rate limits' with api_key), error handling, or response format, leaving gaps for a tool with potential API constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences and a structured 'Args' section, front-loading the purpose. It avoids redundancy, but the 'Args' formatting could be more integrated for smoother reading.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, the description need not explain return values. It covers the purpose and parameters adequately, but with no annotations and behavioral gaps like rate limit specifics, it could be more complete for a data retrieval tool with API dependencies.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains that 'company' is for searching with partial match support and 'api_key' is optional for higher rate limits, adding meaningful semantics beyond the bare schema, though it could detail format or examples for better clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get all WARN Act layoff notices') and resource ('for a specific company'), distinguishing it from siblings like 'get_recent_layoffs' or 'search_layoffs' by focusing on company-specific retrieval rather than general or recent notices.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving company-specific layoff data but does not explicitly state when to use this tool versus alternatives like 'search_layoffs' or 'get_recent_layoffs', nor does it mention prerequisites or exclusions, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_pulse
Get a single-call market snapshot across all 6 datasets.
Returns: WARN stats (30d trend), top industries, at-risk companies,
LCA/H-1B counts, DOL claims, SEC filings, bankruptcies, JOLTS snapshot.
Args:
api_key: Optional API key for higher rate limits

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
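Over the Streamable HTTP transport, an agent invokes this tool with a standard MCP `tools/call` JSON-RPC request. A sketch of the payload shape; the `id` value is illustrative, and `api_key` is omitted here so the call runs at anonymous rate limits:

```python
import json

# Standard MCP tools/call envelope; get_market_pulse takes only the
# optional api_key argument.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_market_pulse", "arguments": {}},
}
payload = json.dumps(request)
```

The same envelope works for every tool on this server; only `params.name` and `params.arguments` change.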
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses the tool's comprehensive return structure and mentions rate limit implications for the api_key parameter, but doesn't cover authentication needs, error handling, or data freshness. It adds value beyond basic function but misses key behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by a structured list of returns and parameter explanation. Every sentence adds value with zero waste, making it highly efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregating 6 datasets) and the presence of an output schema, the description is largely complete. It outlines the return scope and parameter use, though it could better address when this tool is preferable over more specific siblings given the server's context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and only one parameter, the description fully compensates by explaining api_key's purpose ('for higher rate limits') and optionality. It adds meaningful context not present in the schema, though it could specify format or source requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get a single-call market snapshot') and resource ('across all 6 datasets'), distinguishing it from siblings like get_company_layoffs or get_state_summary by emphasizing comprehensive multi-dataset coverage in one call.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for a broad market overview, but lacks explicit guidance on when to use alternatives like get_stats or get_state_intelligence. It provides some context by mentioning 'single-call' efficiency versus more focused tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_recent_layoffs
Get the most recent WARN Act layoff notices.
Args:
days: Look back this many days (default 30)
state: Optional state filter (2-letter code)
limit: Max results (default 25, max 100)
api_key: Optional API key for higher rate limits

| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| limit | No | | |
| state | No | | |
| api_key | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
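The `days` parameter defines a lookback window (default 30). A sketch of the window semantics as an agent might model them when post-filtering results; treating both endpoints as inclusive is an assumption, since the description does not specify boundary handling:

```python
from datetime import date, timedelta
from typing import Optional

def within_lookback(notice_date: date, days: int = 30,
                    today: Optional[date] = None) -> bool:
    """True if a notice falls inside the lookback window, mirroring the
    documented default of 30 days. Inclusive boundaries are assumed."""
    today = today or date.today()
    return today - timedelta(days=days) <= notice_date <= today
```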
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions an optional API key for higher rate limits, which adds some context about performance traits, but it fails to describe critical behaviors such as authentication requirements, error handling, response format, or whether the operation is read-only or has side effects. For a data retrieval tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a structured list of parameters with key details like defaults and constraints. Every sentence earns its place by providing essential information without redundancy, making it highly efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there is an output schema (which handles return values), no annotations, and moderate complexity with 4 parameters, the description is partially complete. It covers the purpose and parameters well but lacks behavioral context like authentication needs or error handling. For a tool with siblings and no annotations, it should do more to guide usage and disclose operational traits to be fully adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate by explaining parameters. It provides clear semantics for all four parameters: 'days' specifies the lookback period with a default, 'state' allows filtering by 2-letter code, 'limit' sets max results with a default and maximum, and 'api_key' indicates optional use for rate limits. This adds substantial meaning beyond the bare schema, though it could benefit from more detail on state code formats or API key usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and resource ('most recent WARN Act layoff notices'), distinguishing it from siblings like 'get_company_layoffs' or 'search_layoffs' by focusing on recency rather than company-specific or search-based queries. It immediately communicates the tool's core function without ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving recent layoff data, but it does not explicitly state when to use this tool versus alternatives like 'get_company_layoffs' or 'search_layoffs'. No guidance is provided on prerequisites, exclusions, or specific scenarios where this tool is preferred over siblings, leaving the agent to infer context from the tool name and description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_risk_signal
Get companies ranked by composite distress signal across all datasets.
Combines WARN layoff volume/recency, SEC restructuring filings,
bankruptcy filings, and H-1B denial rates into a single risk score.
Levels: Critical (7+), Elevated (4-6), Moderate (2-3), Low (1).
Requires Starter tier or higher. Get your API key at warnfirehose.com/account
Args:
state: Optional 2-letter state code to filter
min_score: Minimum risk score (default 3)
limit: Max results (default 15, max 50)
api_key: Your WARN Firehose API key (Starter tier required)

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| state | No | | |
| api_key | No | | |
| min_score | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
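The documented bands — Critical (7+), Elevated (4-6), Moderate (2-3), Low (1) — translate directly into a threshold lookup. A sketch; note the description does not define scores below 1, so mapping them to "Low" is an assumption:

```python
def risk_level(score: int) -> str:
    """Map a composite distress score to the documented bands:
    Critical (7+), Elevated (4-6), Moderate (2-3), Low (1).
    Scores below 1 are undocumented and assumed Low here."""
    if score >= 7:
        return "Critical"
    if score >= 4:
        return "Elevated"
    if score >= 2:
        return "Moderate"
    return "Low"
```

This also clarifies the `min_score` default of 3: out of the box, the tool returns only Moderate-and-above companies.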
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a read operation (implied by 'Get'), requires authentication ('api_key' and 'Starter tier required'), and caps result counts ('limit: Max results (default 15, max 50)'). It also explains the risk score levels, adding valuable context. However, it lacks details on error handling or response format, which could be improved.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized, with a clear purpose statement followed by details on risk score composition, usage requirements, and parameter explanations. Every sentence adds value, but it could be slightly more front-loaded by moving the parameter details to a separate section or bullet points for better readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a 4-parameter tool with no annotations but an output schema, the description is mostly complete. It covers purpose, usage prerequisites, parameter semantics, and behavioral aspects like authentication and limits. The presence of an output schema reduces the need to explain return values, but the description could benefit from mentioning the output format or example response to enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must fully compensate. It does so by clearly explaining all four parameters: 'state' (Optional 2-letter state code to filter), 'min_score' (Minimum risk score with default 3), 'limit' (Max results with default 15, max 50), and 'api_key' (Your WARN Firehose API key with Starter tier required). This adds essential meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get companies ranked by composite distress signal across all datasets.' It specifies the verb ('Get'), resource ('companies ranked by composite distress signal'), and scope ('across all datasets'), distinguishing it from siblings like get_company_layoffs or get_recent_layoffs by focusing on a composite risk score rather than individual datasets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage by mentioning 'Requires Starter tier or higher' and directing users to 'Get your API key at warnfirehose.com/account,' which helps set prerequisites. However, it does not explicitly state when to use this tool versus alternatives like get_state_summary or get_market_pulse, leaving some ambiguity in sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_state_intelligence
Get a unified state profile combining all 6 datasets for a US state.
Returns WARN notices, LCA petitions, H-1B approvals/denials, DOL claims,
bankruptcy matches, JOLTS data, and a composite distress score.
Requires Pro tier or higher. Get your API key at warnfirehose.com/account
Args:
state_code: Two-letter state abbreviation (e.g. CA, TX, NY)
api_key: Your WARN Firehose API key (Pro tier required)

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | | |
| state_code | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
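`state_code` expects a two-letter abbreviation (e.g. CA, TX, NY). A sketch of client-side normalization before the call; this checks shape only, not membership in the actual 50-state list, since the description doesn't say how unknown codes are handled:

```python
import re

STATE_CODE = re.compile(r"^[A-Z]{2}$")

def normalize_state_code(value: str) -> str:
    """Uppercase and shape-check the two-letter abbreviation the tool
    expects. Rejecting non-matching input locally is an assumption;
    the server may accept or error on its own terms."""
    code = value.strip().upper()
    if not STATE_CODE.match(code):
        raise ValueError(f"not a two-letter state code: {value!r}")
    return code
```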
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the return content (multiple datasets and a composite distress score) and adds important context: authentication requirements (Pro tier API key) and where to obtain it. However, it lacks details on rate limits, error handling, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by details on returns, requirements, and parameters in a structured format. Every sentence adds value without redundancy, making it efficient for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (2 parameters, no annotations, but has output schema), the description is mostly complete. It covers purpose, usage context, parameters, and authentication needs. Since an output schema exists, it need not explain return values in detail, but could improve by mentioning data formats or limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning for both parameters: state_code is explained as 'Two-letter state abbreviation (e.g. CA, TX, NY)' and api_key as 'Your WARN Firehose API key (Pro tier required)'. This clarifies usage beyond the bare schema, though it could provide more on format constraints or optionality.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'unified state profile combining all 6 datasets for a US state', specifying it includes WARN notices, LCA petitions, H-1B approvals/denials, DOL claims, bankruptcy matches, JOLTS data, and a composite distress score. This distinguishes it from sibling tools like get_state_summary or get_stats by emphasizing comprehensive data integration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to obtain a unified profile for a US state with multiple datasets. It mentions 'Requires Pro tier or higher' and directs users to get an API key, but does not explicitly state when not to use it or name alternatives among siblings, such as get_state_summary for a simpler view.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_state_summary
Get a summary of WARN Act layoff data for a specific US state.
Args:
state: Two-letter state code (e.g. CA, TX, NY, FL)
api_key: Optional API key for higher rate limits

| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | | |
| api_key | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
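The tool's output fields are undocumented beyond a single `result`. A sketch of the kind of aggregation a state summary plausibly performs over raw WARN records — notice count, total affected workers, top companies. The record field names (`state`, `company`, `affected`) are assumptions for illustration, not the server's actual schema:

```python
from collections import Counter

def summarize_state(records, state: str) -> dict:
    """Aggregate WARN records for one state the way a summary endpoint
    might. Field names on the input records are hypothetical."""
    rows = [r for r in records if r["state"] == state]
    return {
        "state": state,
        "notices": len(rows),
        "affected": sum(r.get("affected", 0) for r in rows),
        "top_companies": Counter(r["company"] for r in rows).most_common(3),
    }
```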
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that api_key is optional for higher rate limits, which adds some context about rate limiting. However, it lacks details on permissions, response format, error handling, or whether the operation is read-only or has side effects, leaving significant gaps for a tool that fetches data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: one stating the purpose and another detailing parameters. It is front-loaded with the main purpose, and the parameter explanations are concise, though the formatting with 'Args:' could be slightly more integrated for better flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, the description does not need to explain return values. However, with no annotations and 0% schema description coverage, it partially compensates with parameter semantics but lacks behavioral context like authentication needs or data freshness. This makes it adequate but incomplete for a data-fetching tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining that 'state' is a two-letter US state code with examples (CA, TX, NY, FL) and that 'api_key' is optional for higher rate limits, clarifying usage beyond the schema's basic types. This covers both parameters effectively, though it could provide more detail on format constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'summary of WARN Act layoff data for a specific US state,' making the purpose specific and actionable. It distinguishes from siblings like get_company_layoffs or get_recent_layoffs by focusing on state-level summaries rather than company-specific or time-filtered data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for state-specific summaries but does not explicitly state when to use this tool versus alternatives like get_state_intelligence or get_stats. It provides no guidance on prerequisites or exclusions, leaving the agent to infer context from sibling tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stats (B)
Get overall statistics about the WARN Firehose database.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
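Since `api_key` is this tool's only parameter and it is optional, a caller only has to decide whether to send it at all. A minimal sketch (the helper name is hypothetical, not part of the server):

```python
def build_get_stats_args(api_key=None):
    """Build the arguments dict for get_stats; api_key is the only
    (optional) parameter, so omit the key entirely when not supplied."""
    args = {}
    if api_key is not None:
        args["api_key"] = api_key
    return args

print(build_get_stats_args())            # {}
print(build_get_stats_args("demo-key"))  # {'api_key': 'demo-key'}
```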
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states it retrieves statistics but doesn't disclose behavioral traits such as authentication needs (implied by the api_key parameter), rate limits, response format, or whether it's read-only. This is a significant gap for a tool with parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It's appropriately sized and front-loaded, clearly stating the tool's purpose without unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one optional parameter) and the presence of an output schema, the description is somewhat complete but lacks context. It doesn't explain what 'overall statistics' entail or how it differs from sibling tools, making it adequate but with clear gaps in usage and behavioral transparency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no meaning beyond the input schema, which has 0% description coverage for the single parameter 'api_key'. With only one optional parameter and an output schema present, the baseline score is 3: the schema handles parameter documentation minimally, but the description still does nothing to close the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('overall statistics about the WARN Firehose database'), making the purpose understandable. However, it doesn't differentiate this tool from siblings like 'get_state_summary' or 'get_state_intelligence', which might also retrieve statistical data, so it lacks sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_state_summary' or 'get_recent_layoffs', there's no indication of context, prerequisites, or exclusions, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_talent_pipeline (A)
Find available talent from recent layoffs, cross-referenced with LCA visa roles.
Shows what occupations/skills each laid-off company was hiring for.
Useful for recruiters targeting skilled workers from recently laid-off companies.
Requires Starter tier or higher. Get your API key at warnfirehose.com/account
Args:
state: Optional 2-letter state code
days: Look back this many days (default 90)
limit: Max results (default 15, max 50)
api_key: Your WARN Firehose API key (Starter tier required)
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| limit | No | | |
| state | No | | |
| api_key | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
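The documented defaults (days 90, limit 15) and the hard cap (limit 50) suggest a small amount of client-side normalization before calling. A hedged sketch, with a hypothetical helper name, assuming the clamping behavior the description implies:

```python
def build_talent_pipeline_args(state=None, days=90, limit=15, api_key=None):
    # Clamp limit to the documented maximum of 50 before sending.
    args = {"days": days, "limit": min(limit, 50)}
    if state is not None:
        args["state"] = state  # optional 2-letter state code
    if api_key is not None:
        args["api_key"] = api_key  # Starter tier or higher required
    return args

print(build_talent_pipeline_args(state="TX", limit=80))
# {'days': 90, 'limit': 50, 'state': 'TX'}
```

Whether the server itself clamps an out-of-range limit or rejects it is not stated in the listing, which is exactly the kind of behavioral gap the assessment below notes.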
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by specifying authentication requirements ('Requires Starter tier or higher', 'API key'), but lacks details about rate limits, pagination behavior, error handling, or what the output contains. The description doesn't contradict any annotations since none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with purpose first, usage context second, requirements third, and parameters last. Every sentence earns its place, though the API key mention appears twice (in requirements and parameters section), creating minor redundancy. Overall efficient for a 4-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, no annotations, but with output schema), the description provides good coverage. It explains purpose, usage context, authentication requirements, and all parameters. Since an output schema exists, the description doesn't need to explain return values. The main gap is lack of behavioral details like rate limits or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing clear semantic meaning for all 4 parameters. It explains what 'state' represents (2-letter state code), what 'days' does (look back period with default), what 'limit' controls (results with default and max), and what 'api_key' is for (authentication with tier requirement).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Find available talent', 'Shows what occupations/skills') and resources ('from recent layoffs', 'cross-referenced with LCA visa roles'). It distinguishes itself from sibling tools like 'get_company_layoffs' or 'get_recent_layoffs' by focusing on talent pipeline analysis rather than raw layoff data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('Useful for recruiters targeting skilled workers from recently laid-off companies'), but doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools. The mention of 'Starter tier or higher' provides important prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_layoffs (A)
Search WARN Act layoff notices by company name, city, or keyword.
Args:
query: Search term (company name, city, etc.)
state: Optional 2-letter state code to filter (e.g. CA, TX, NY)
limit: Max results to return (default 20, max 100)
api_key: Optional API key for higher rate limits
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| state | No | | |
| api_key | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
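Over the streamable-HTTP transport, a call to this tool would be an HTTP POST carrying a `tools/call` payload. The sketch below builds (but does not send) such a request; the endpoint URL is a placeholder, as the real one comes from the server listing:

```python
import json
import urllib.request

# Placeholder endpoint; substitute the connector's streamable-HTTP URL.
ENDPOINT = "https://example.com/mcp"

payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_layoffs",
        # "query" is the only required argument; state and limit are filters.
        "arguments": {"query": "Acme", "state": "NY", "limit": 20},
    },
}

# Build the POST request an MCP HTTP client would issue.
req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
```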
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions rate limits for the api_key parameter, which adds useful context, but lacks details on authentication requirements, error handling, pagination, or what the search returns (though output schema exists). It adequately describes the action but misses deeper behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by a structured Args section that efficiently documents parameters. Every sentence earns its place with no wasted words, making it easy to scan and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage and an output schema (which handles return values), the description does a good job covering input semantics and basic usage. However, it lacks context on authentication needs, error cases, or how results are structured, which could be important for a search tool with optional API keys.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains all 4 parameters: query (search term for company/city/keyword), state (optional 2-letter code filter), limit (default and max values), and api_key (optional for higher rate limits). This adds significant meaning beyond the bare schema, though it could elaborate on query syntax or state code validation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches WARN Act layoff notices by specific criteria (company name, city, or keyword), which is a specific verb+resource combination. It distinguishes itself from siblings like 'get_recent_layoffs' or 'get_company_layoffs' by emphasizing search functionality across multiple fields rather than predefined retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching layoff notices with optional filters, but does not explicitly state when to use this tool versus alternatives like 'get_recent_layoffs' or 'get_company_layoffs'. No guidance on exclusions or prerequisites is provided, leaving usage context somewhat ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
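A short sketch of generating that file locally before deploying it to your server's document root (the email is a placeholder to replace with your Glama account email):

```python
import json
import pathlib

# Recreate the claim file shown above; only the email is a placeholder.
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

well_known = pathlib.Path(".well-known")
well_known.mkdir(exist_ok=True)
(well_known / "glama.json").write_text(json.dumps(claim, indent=2))
print((well_known / "glama.json").read_text())
```

Serve the resulting directory so the file is reachable at `https://<your-domain>/.well-known/glama.json`.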
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.