Server Details

Free UK SIC code lookup, GICS/ICB cross-classification mapping, and Companies House company search. Covers 731 SIC 2007 codes, 273 GICS entries, 249 ICB entries, 1,235 cross-classification mappings, and 5.6 million UK companies. No API key required.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 4/5

Tools are mostly distinct, but lookup_sic_code and search_sic_codes both deal with SIC codes (lookup vs. search) and browse_classification_hierarchy overlaps partially with lookup_sic_code's hierarchy features, causing minor potential confusion.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case, e.g., browse_classification_hierarchy, lookup_sic_code, search_uk_companies_by_industry. No mixing of conventions.

Tool Count: 5/5

6 tools is a well-scoped number for a specialized domain covering SIC codes, classification systems, and UK company lookups. Each tool has a clear role without redundancy.

Completeness: 4/5

Covers core operations: search, browse, lookup, convert, and company details. Minor gaps like bulk operations or code listing are missing but not critical for typical use cases.

Available Tools

8 tools
browse_classification_hierarchy: A
Read-only · Idempotent

Browse the hierarchy tree of a classification system (UK SIC 2007, GICS, or ICB). Returns child entries at the next level. Omit parent_code to get top-level entries. Use this to explore what codes exist in each system.

Parameters (JSON Schema)
- system (required): Classification system to browse: sic, gics, or icb
- parent_code (optional): Parent code to get children of. Omit to get top-level entries (SIC sections, GICS sectors, ICB industries).
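An MCP client invokes this tool through a standard `tools/call` request. A minimal sketch of the request body in Python, using the argument names from the schema above — the helper function and request ids are illustrative, not part of the server:

```python
import json

# Hypothetical helper: build a tools/call request body for this tool.
def browse_request(system, parent_code=None, request_id=1):
    args = {"system": system}
    if parent_code is not None:
        args["parent_code"] = parent_code  # omit for top-level entries
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "browse_classification_hierarchy", "arguments": args},
    }

# Top-level SIC sections: no parent_code.
top = browse_request("sic")
# Children of GICS sector 45.
children = browse_request("gics", parent_code="45", request_id=2)

print(json.dumps(top, indent=2))
```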
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly, idempotent, non-destructive. Description adds that it returns child entries at the next level, confirming behavior. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-load key action and context. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Missing output schema, and description only says 'returns child entries at the next level' without specifying format (e.g., codes, names). Adequate for a simple hierarchical browse, but could clarify return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description repeats parameter guidance from schema (omit parent_code for top-level), but adds no new semantic value beyond the schema's field descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it browses the hierarchy tree of classification systems (UK SIC 2007, GICS, ICB) and returns child entries. It clearly distinguishes from siblings like convert, lookup, and search tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear guidelines: omit parent_code for top-level entries, provide it for children. Says 'use this to explore what codes exist.' However, no explicit mention of when not to use or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_between_classifications: A
Read-only · Idempotent

Convert a classification code between UK SIC 2007, GICS (MSCI), and ICB (FTSE Russell) systems. Returns equivalent codes in the target system with confidence levels. The mapping covers all 618 SIC classes, 163 GICS sub-industries, and 173 ICB subsectors.

Parameters (JSON Schema)
- code (required): The classification code to convert (e.g. '6202' for SIC class, '45102030' for GICS sub-industry)
- from (required): Source classification system
- to (required): Target classification system
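All three arguments are required, and converting a code into its own system is a no-op, so a client can guard against that locally before calling. A hedged sketch — the helper name and the pre-flight check are mine, not the server's:

```python
# Hypothetical pre-flight validation for convert_between_classifications.
VALID_SYSTEMS = {"sic", "gics", "icb"}

def convert_args(code, source, target):
    if source not in VALID_SYSTEMS or target not in VALID_SYSTEMS:
        raise ValueError(f"system must be one of {sorted(VALID_SYSTEMS)}")
    if source == target:
        raise ValueError("source and target systems must differ")
    # Keys match the tool's schema: 'code', 'from', 'to'.
    return {"code": code, "from": source, "to": target}

args = convert_args("6202", "sic", "gics")
```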
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds context about coverage (618 SIC classes, etc.) and mentions confidence levels, which is beyond what annotations provide. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the action and systems, and includes relevant coverage numbers. Every word earns its place with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple conversion tool with three parameters and no output schema, the description covers purpose, systems, and scope. It lacks detail on result format or confidence level meaning, but is adequate for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with clear descriptions for all three parameters. The description does not add extra meaning beyond the schema; it lists the involved systems but not parameter details. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'convert' and the resource: classification codes between UK SIC 2007, GICS, and ICB systems. It specifies the output includes equivalent codes with confidence levels, and distinguishes from sibling tools like lookup or browse.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for converting codes but does not provide explicit guidance on when to use this tool versus siblings like lookup_sic_code or search_sic_codes. No when-to-use or when-not-to-use conditions are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_industry_profile: A
Read-only · Idempotent

Get an industry intelligence profile for a SIC, GICS, or ICB classification code. Returns company counts (active vs dissolved), top geographic locations, company type distribution, and cross-classification mappings. Useful for market sizing and sector analysis.

Parameters (JSON Schema)
- code (required): Classification code (e.g. '62020' for SIC, '45' for GICS sector, '1010' for ICB supersector)
- system (optional; default: sic): Classification system
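The schema marks `system` optional with a default of 'sic', so a caller may send only `code`. A sketch of client-side defaulting, using the example values from the parameter descriptions — the helper name is hypothetical:

```python
# Build arguments for get_industry_profile, falling back to the schema's
# documented default system ('sic') when none is given.
def profile_arguments(code, system=None):
    return {"code": code, "system": system or "sic"}

sic_args = profile_arguments("62020")        # falls back to SIC
icb_args = profile_arguments("1010", "icb")  # ICB supersector, per the example above
```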
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds context on the returned data composition but does not disclose additional behavioral traits such as data freshness or pagination. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences with no wasted words. It efficiently conveys the function and output, though a slightly more structured format (e.g., bullet points) could improve scannability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with only two parameters and no output schema, the description covers the purpose, input, and output well. It is complete enough for an AI agent to use, though it omits mention of any rate limits or data freshness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The parameter schema is fully described (100% coverage), and the description adds only minor context. The baseline of 3 is appropriate as the description does not significantly augment the schema's information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves an industry profile for specific classification codes (SIC, GICS, ICB) and enumerates the returned data, distinguishing it from sibling tools like browse_classification_hierarchy or convert_between_classifications.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions it is useful for market sizing and sector analysis, implying a use case, but does not explicitly state when not to use it or provide direct alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_sic_code: A
Read-only · Idempotent

Look up a classification code in the UK SIC 2007, GICS (MSCI), or ICB (FTSE Russell) system. Returns the code name, hierarchy level, breadcrumb trail from root to code, child codes, and cross-classification mappings to other systems. Use this when you have a specific code and need its details.

Parameters (JSON Schema)
- code (required): The classification code to look up (e.g. '62020' for SIC, 'J' for SIC section, '45' for GICS sector, '1010' for ICB supersector)
- system (optional; default: sic): Which classification system the code belongs to: sic (UK SIC 2007), gics (MSCI Global Industry Classification Standard), or icb (FTSE Russell Industry Classification Benchmark)
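The description's last sentence draws the line between this tool and search_sic_codes: use lookup when a code is in hand, search when only a plain-English activity description exists. A hypothetical routing helper makes that choice explicit (the helper and its logic are illustrative, not part of the server):

```python
# Route between sibling tools based on what the caller actually has.
def choose_sic_tool(code=None, activity=None):
    if code is not None:
        return ("lookup_sic_code", {"code": code, "system": "sic"})
    if activity is not None:
        return ("search_sic_codes", {"query": activity})
    raise ValueError("provide a code or an activity description")

tool, args = choose_sic_tool(code="62020")
```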
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds context about the return data (code name, hierarchy, etc.) but does not disclose additional behavioral traits such as rate limits, authentication requirements, or edge cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. The first sentence specifies action and returns, the second provides usage guidance. Perfectly concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool with two parameters and no output schema, the description adequately covers purpose, usage context, and return data. Minor gap: no mention of error handling or format of returned data, but for a straightforward lookup this is acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters have full schema descriptions covering their meaning (code string and system enum with examples). The description reiterates these but adds no new semantic information beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it looks up a classification code in UK SIC 2007, GICS, or ICB systems and returns specific details (code name, hierarchy, breadcrumb, child codes, cross-mappings). It distinguishes from sibling tools by specifying 'Use this when you have a specific code and need its details.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use the tool: when you have a specific code. It implies that for exploration without a specific code, one should use search_sic_codes or browse_classification_hierarchy. However, it does not explicitly exclude other scenarios or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_uk_company: A
Read-only · Idempotent

Look up a UK company registered at Companies House by name or registration number. Returns company details including name, status, type, incorporation date, registered address, and SIC codes with their full names and GICS/ICB mappings. Covers 5.6 million UK companies.

Parameters (JSON Schema)
- query (optional): Company name to search for (e.g. 'Tesco', 'Rolls Royce')
- company_number (optional): Companies House registration number (e.g. '00445790')
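Both parameters are optional in the schema, but a call with neither would have nothing to look up. A hypothetical client-side guard — the helper is mine; the argument names and example number come from the schema above:

```python
# Require at least one of query / company_number before calling the tool.
def company_lookup_args(query=None, company_number=None):
    if query is None and company_number is None:
        raise ValueError("provide query or company_number")
    args = {}
    if query is not None:
        args["query"] = query
    if company_number is not None:
        args["company_number"] = company_number
    return args

args = company_lookup_args(company_number="00445790")
```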
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and openWorldHint. The description adds value by detailing the returned data (name, status, type, incorporation date, registered address, SIC codes with mappings) and scale (5.6 million companies), which are beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with purpose and parameters, followed by return details and scale. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately covers return fields. Annotations cover safety, and parameters are fully documented. The scale and scope are mentioned, leaving no gaps for this simple lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear parameter descriptions. The tool description merely reiterates 'by name or registration number' without adding new meaning. Baseline 3 applies as schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Look up') and resource ('UK company registered at Companies House'), and specifies methods ('by name or registration number'). It effectively distinguishes from sibling tools like search_uk_companies_by_industry by focusing on direct lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for direct company lookup but lacks explicit guidance on when to use alternatives (e.g., for industry-based search, use sibling tools). No 'when not to use' or exclusions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_companies: A
Read-only · Idempotent

Search UK companies with flexible filters. Combine name search, postcode, status, incorporation date range, SIC/GICS/ICB codes, accounts category, and company type. Returns enriched results with all SIC codes, GICS/ICB mappings, and address details. Cursor pagination for large result sets.

Parameters (JSON Schema)
- query (optional): Company name to search for (min 2 chars)
- limit (optional): Max results (default 20, max 50)
- cursor (optional): Pagination cursor from previous response
- status (optional; default: all): Company status filter
- sic_code (optional): SIC code filter (any level: section letter, 2-5 digit code)
- gics_code (optional): GICS code filter (any level: 2-8 digit code)
- icb_code (optional): ICB code filter (any level: 2-8 digit code)
- postcode (optional): Postcode prefix (e.g. 'SW1', 'EC2A')
- company_type (optional): Company type filter (e.g. 'Private Limited Company', 'PLC', 'LLP')
- accounts_category (optional): Accounts category filter (e.g. 'MICRO-ENTITY', 'SMALL', 'MEDIUM', 'DORMANT')
- incorporated_after (optional): ISO date (e.g. '2020-01-01')
- incorporated_before (optional): ISO date (e.g. '2024-12-31')
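The description promises cursor pagination for large result sets. A sketch of a client loop that follows it, assuming each response carries a `next_cursor` field (as the sibling tool's cursor parameter suggests) and a list of companies under a key I am guessing at; `call_tool` is a stand-in for whatever MCP client transport is in use:

```python
# Iterate all pages of a search_companies result set via cursor pagination.
def iter_companies(call_tool, **filters):
    cursor = None
    while True:
        args = dict(filters)
        if cursor:
            args["cursor"] = cursor
        page = call_tool("search_companies", args)
        yield from page.get("companies", [])  # result key is an assumption
        cursor = page.get("next_cursor")
        if not cursor:
            break

# Fake transport for illustration: two pages of stub results.
pages = iter([
    {"companies": [{"name": "A"}], "next_cursor": "abc"},
    {"companies": [{"name": "B"}]},
])
results = list(iter_companies(lambda name, a: next(pages), sic_code="62020"))
```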
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark the tool as read-only, idempotent, and non-destructive. Description adds behavioral details: it returns enriched results with all SIC codes, GICS/ICB mappings, address details, and uses cursor pagination. This adds context beyond annotations without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with purpose, followed by filter enumeration and return characteristics. Every sentence adds value; no redundancy or verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description mentions enriched results and cursor pagination, which covers key return aspects. It could be more explicit about all filters being optional and combinable, but overall sufficient for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, so baseline is 3. Description lists most parameters but does not add significant meaning beyond the schema's own parameter descriptions. No additional semantic value provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it searches UK companies with flexible filters, listing specific filter types. It distinguishes from sibling tools by being the most comprehensive search option, offering filters for name, status, codes, and more.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies use when flexible filtering is needed, but does not explicitly state when not to use or mention alternatives. Given sibling tools like lookup_uk_company and search_uk_companies_by_industry, the description provides clear context but lacks direct exclusion or comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_sic_codes: A
Read-only · Idempotent

Search for UK SIC 2007 codes by business activity description. Describe what a business does in plain English and get ranked SIC code recommendations with relevance scores, hierarchy breadcrumbs, and GICS/ICB cross-classification mappings. Useful for finding the right SIC code for Companies House registration.

Parameters (JSON Schema)
- query (required): Business activity description in plain English (e.g. 'online candle shop', 'software development', 'plumbing and heating', 'restaurant')
- limit (optional): Maximum number of results to return (default 5, max 20)
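The schema bounds `limit` at 20 with a default of 5, so a client can clamp locally before sending. A small illustrative helper (name and clamping policy are mine):

```python
# Clamp limit into the schema's documented range before building arguments.
def sic_search_args(query, limit=5):
    return {"query": query, "limit": max(1, min(limit, 20))}

args = sic_search_args("online candle shop", limit=50)  # clamped to 20
```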
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false, so the agent knows it is safe. The description adds value by detailing the output: ranked recommendations with relevance scores, hierarchy breadcrumbs, and GICS/ICB cross-classification mappings, which go beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loads the core purpose, and provides essential details without any unnecessary words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (two parameters, no enums, no output schema), the description is complete. It explains what the tool does, how to use it, and what the output includes, leaving no significant gaps for an agent to act.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters well. The description adds minimal new semantic information (e.g., 'plain English' for query), but does not significantly enhance understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'search', the resource 'UK SIC 2007 codes', and the method 'by business activity description'. It distinguishes from sibling tools like lookup_sic_code and browse_classification_hierarchy by emphasizing natural language search for SIC codes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a clear use case: 'Useful for finding the right SIC code for Companies House registration.' While it doesn't explicitly say when not to use or name alternatives, the context is sufficient for the agent to infer appropriate usage compared to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_uk_companies_by_industry: A
Read-only · Idempotent

Find UK companies registered under a specific SIC, GICS, or ICB classification code. Returns enriched company data including all SIC codes, GICS/ICB mappings, address, company type, and incorporation date. Supports filtering by postcode, date range, accounts category, and company type. Cursor pagination for large result sets.

Parameters (JSON Schema)
- code (required): Classification code to search by (e.g. '62020' for IT consultancy, '45' for GICS Energy sector)
- system (optional; default: sic): Classification system the code belongs to
- limit (optional): Maximum number of companies to return (default 20, max 50)
- cursor (optional): Pagination cursor from a previous response's next_cursor field
- status (optional; default: active): Filter by company status
- postcode (optional): Postcode prefix filter (e.g. 'SW1', 'EC2A', 'M1')
- company_type (optional): Filter by company type (e.g. 'Private Limited Company', 'PLC', 'LLP')
- accounts_category (optional): Filter by accounts category (e.g. 'MICRO-ENTITY', 'SMALL', 'MEDIUM', 'DORMANT')
- incorporated_after (optional): ISO date (e.g. '2020-01-01'). Only companies incorporated after this date.
- incorporated_before (optional): ISO date. Only companies incorporated before this date.
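The optional filters combine freely with the required code. An example arguments dict built from the parameter descriptions above — the values are the schema's own examples, not live data, and the dict is what a client would place under `tools/call` arguments:

```python
# SIC 62020 (IT consultancy) companies in an EC2A postcode, incorporated
# since 2020; limit set to the schema's maximum of 50.
arguments = {
    "code": "62020",
    "system": "sic",
    "postcode": "EC2A",
    "incorporated_after": "2020-01-01",
    "limit": 50,
}
```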
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, openWorldHint=true, so the description's statement that it 'returns a list of companies with their details' adds minimal extra behavioral context. There is no contradiction, and the description does not elaborate on rate limits, pagination behavior, or other nuances beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: first sentence states purpose, second describes output, third gives use cases. No filler; every sentence earns its place. Could be slightly more concise (e.g., combine second and third) but overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately hints at return value ('list of companies with their details') and provides market research context. For a relatively simple search tool with good schema coverage and annotations, this is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (code, limit, status, system). The description mentions classification codes (SIC, GICS, ICB) and examples but does not add significant new meaning beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'find' and resource 'UK companies registered under a specific SIC, GICS, or ICB classification code.' It distinguishes from sibling tools (e.g., lookup_uk_company for individual lookups, browse_classification_hierarchy for hierarchy navigation) by focusing on industry-based company search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage contexts: 'market research, finding competitors, or understanding how many companies operate in a given industry.' However, it does not explicitly state when not to use this tool (e.g., for individual company lookup) or directly contrast with siblings, though the context is fairly clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
