Server Details

UK SIC code lookup, GICS/ICB mapping, and Companies House search. 731 codes, 5.6M companies.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: jackmmaher/siccodes.co.uk
GitHub Stars: 0

Tool Descriptions: A

Average 4.3/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 4/5

Tools are mostly distinct, with some overlap between lookup and search functions (e.g., lookup_sic_code vs search_sic_codes, search_companies vs search_uk_companies_by_industry). Descriptions clarify differences, but an agent might occasionally pick the wrong one.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun or verb_prep_noun pattern in snake_case. Verbs are descriptive and distinct, making the naming predictable and clear.

Tool Count: 5/5

With 8 tools, the server covers core operations for classification browsing, code lookup, conversion, industry profiling, and company search. No unnecessary tools, and the count is appropriate for the scope.

Completeness: 5/5

The tool set provides comprehensive coverage for exploring classifications, converting between systems, getting industry profiles, and searching/looking up companies. No obvious gaps in the read-only data domain.

Available Tools

8 tools
browse_classification_hierarchy (A)
Read-only · Idempotent

Browse the hierarchy tree of a classification system (UK SIC 2007, GICS, or ICB). Returns child entries at the next level. Omit parent_code to get top-level entries. Use this to explore what codes exist in each system.

Parameters (JSON Schema)
- system (required): Classification system to browse: sic, gics, or icb
- parent_code (optional): Parent code to get children of. Omit to get top-level entries (SIC sections, GICS sectors, ICB industries).
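As an illustration of how an MCP client would invoke this tool, the sketch below builds the JSON-RPC `tools/call` request by hand. The helper name `make_tool_call` is hypothetical; the framing follows the standard MCP `tools/call` method, and the parameter values come from the docs above:

```python
import json

def make_tool_call(tool_name: str, arguments: dict, call_id: int = 1) -> str:
    # Standard MCP framing: a JSON-RPC 2.0 request with method "tools/call".
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Top-level entries (SIC sections): omit parent_code entirely.
top_level = make_tool_call("browse_classification_hierarchy", {"system": "sic"})

# Children of SIC section 'J': pass it as parent_code.
section_j = make_tool_call(
    "browse_classification_hierarchy",
    {"system": "sic", "parent_code": "J"},
)
```

Note that getting top-level entries means leaving `parent_code` out of the arguments object, not passing null.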
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and idempotentHint, so the description adds value by stating it returns 'child entries at the next level' and explains the effect of omitting parent_code. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff, front-loaded with key information. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple browse tool with two parameters and strong annotations, the description adequately covers purpose, usage, and parameter behavior. It doesn't discuss return format or errors, but that's acceptable given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and both parameters are well-described in the schema. The description repeats some schema info but adds the note about 'top-level entries' and a general usage tip. Given high schema coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'browse' and the resource 'hierarchy tree', lists specific classification systems, and distinguishes from siblings like 'lookup_sic_code' or 'convert_between_classifications' by emphasizing exploration of codes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit usage context: 'Use this to explore what codes exist in each system.' It also explains how to get top-level entries vs children. While it doesn't explicitly list when not to use, the purpose is clear enough to differentiate from siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_between_classifications (A)
Read-only · Idempotent

Convert a classification code between UK SIC 2007, GICS (MSCI), and ICB (FTSE Russell) systems. Returns equivalent codes in the target system with confidence levels. The mapping covers all 618 SIC classes, 163 GICS sub-industries, and 173 ICB subsectors.

Parameters (JSON Schema)
- from (required): Source classification system
- to (required): Target classification system
- code (required): The classification code to convert (e.g. '6202' for SIC class, '45102030' for GICS sub-industry)
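A small sketch of assembling arguments for this tool. The `convert_args` helper and its validation are illustrative, not part of the server; the system names and example code come from the parameter docs above:

```python
SYSTEMS = {"sic", "gics", "icb"}

def convert_args(code: str, source: str, target: str) -> dict:
    # Build the arguments dict; "from"/"to" match the tool's parameter names.
    if source not in SYSTEMS or target not in SYSTEMS:
        raise ValueError(f"system must be one of {sorted(SYSTEMS)}")
    if source == target:
        raise ValueError("source and target must differ")
    return {"from": source, "to": target, "code": code}

# Map SIC class '6202' to its GICS equivalents.
payload = convert_args("6202", "sic", "gics")
```

Validating system names client-side avoids a round trip for an obviously malformed call.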
Behavior: 4/5

Annotations already declare readOnlyHint and idempotentHint. The description adds that it returns equivalent codes with confidence levels, providing behavioral context beyond annotations.

Conciseness: 5/5

Two sentences with front-loaded action and no wasted words. Highly efficient and clear.

Completeness: 4/5

Given no output schema, the description adequately explains the systems and coverage. Could mention output format or limitations, but overall sufficient.

Parameters: 4/5

Schema coverage is 100%, and the description adds value by detailing the mapping coverage (e.g., '618 SIC classes'), which aids the agent in understanding the tool's scope.

Purpose: 5/5

The description clearly states the tool converts classification codes between UK SIC 2007, GICS, and ICB systems, and specifies coverage counts, which distinguishes it from sibling lookup tools.

Usage Guidelines: 4/5

It implies usage when converting between classification systems, but does not explicitly state when not to use it or mention alternatives. Siblings like lookup_sic_code serve different purposes, but the description lacks direct guidance.

get_industry_profile (A)
Read-only · Idempotent

Get an industry intelligence profile for a SIC, GICS, or ICB classification code. Returns company counts (active vs dissolved), top geographic locations, company type distribution, and cross-classification mappings. Useful for market sizing and sector analysis.

Parameters (JSON Schema)
- code (required): Classification code (e.g. '62020' for SIC, '45' for GICS sector, '1010' for ICB supersector)
- system (optional, default: sic): Classification system
Behavior: 4/5

Annotations already indicate readOnlyHint true and destructiveHint false. Description adds value by detailing the returned data (company counts, locations, etc.), which is beyond what annotations provide. No contradictions.

Conciseness: 5/5

Three concise sentences: first defines action and inputs, second lists outputs, third suggests use case. No redundant or extraneous information.

Completeness: 4/5

For a read-only tool with 2 simple parameters and no output schema, the description adequately covers inputs, outputs, and purpose. Could mention error handling or code validation, but not necessary for typical use.

Parameters: 4/5

Schema coverage is 100% with descriptions for both parameters. The description adds meaningful examples (e.g., '62020' for SIC, '45' for GICS), providing context beyond the schema. Baseline 3 increased for added value.

Purpose: 5/5

Clearly states it retrieves an industry intelligence profile for classification codes (SIC, GICS, ICB). Lists specific return fields: company counts, top locations, distribution, cross-classification mappings. Distinguishes from sibling tools like browse_classification_hierarchy and lookup_sic_code.

Usage Guidelines: 3/5

Mentions it is useful for market sizing and sector analysis, implying context. However, it does not explicitly state when to use this tool over alternatives (e.g., browse_classification_hierarchy for hierarchy, lookup_sic_code for single code lookup). No when-not or exclusion guidance.

lookup_sic_code (A)
Read-only · Idempotent

Look up a classification code in the UK SIC 2007, GICS (MSCI), or ICB (FTSE Russell) system. Returns the code name, hierarchy level, breadcrumb trail from root to code, child codes, and cross-classification mappings to other systems. Use this when you have a specific code and need its details.

Parameters (JSON Schema)
- code (required): The classification code to look up (e.g. '62020' for SIC, 'J' for SIC section, '45' for GICS sector, '1010' for ICB supersector)
- system (optional, default: sic): Which classification system the code belongs to: sic (UK SIC 2007), gics (MSCI Global Industry Classification Standard), or icb (FTSE Russell Industry Classification Benchmark)
Behavior: 4/5

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, indicating a safe read operation. The description adds value by detailing the return structure (code name, hierarchy, child codes, cross-classification mappings), which is not in annotations. No contradiction.

Conciseness: 5/5

Description is two sentences with no extraneous information. The first sentence encapsulates purpose and outputs, the second provides usage guidance. Front-loaded and efficient.

Completeness: 4/5

Given no output schema, the description sufficiently describes the return fields. Annotations cover safety. Sibling tools handle browsing and searching. The description could mention potential response size or error cases, but overall it is complete for a lookup tool.

Parameters: 3/5

Input schema has 100% coverage with clear descriptions for both parameters: 'code' includes example formats, 'system' is an enum with default and explanation. The description doesn't add additional parameter semantics beyond what the schema provides, hence baseline 3.

Purpose: 5/5

The description clearly states the verb 'Look up' and resource 'classification code' and specifies the three classification systems. It lists the specific return fields (code name, hierarchy level, breadcrumb, child codes, cross-classification mappings). This distinguishes it from sibling tools like 'browse_classification_hierarchy' and 'search_sic_codes'.

Usage Guidelines: 4/5

The description explicitly states when to use the tool: 'Use this when you have a specific code and need its details.' This implies not to use it for browsing or searching without a code. While it doesn't name alternative tools, the sibling names provide context.

lookup_uk_company (A)
Read-only · Idempotent

Look up a UK company registered at Companies House by name or registration number. Returns company details including name, status, type, incorporation date, registered address, and SIC codes with their full names and GICS/ICB mappings. Covers 5.6 million UK companies.

Parameters (JSON Schema)
- query (optional): Company name to search for (e.g. 'Tesco', 'Rolls Royce')
- company_number (optional): Companies House registration number (e.g. '00445790')
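Both parameters are optional in the schema, so presumably at least one must be supplied per call. A small illustrative helper (hypothetical, not part of the server) that enforces that assumption before building the arguments:

```python
from typing import Optional

def lookup_args(query: Optional[str] = None,
                company_number: Optional[str] = None) -> dict:
    # Assumption: the tool needs a name or a registration number to look up.
    if query is None and company_number is None:
        raise ValueError("provide a company name or a registration number")
    args = {}
    if query is not None:
        args["query"] = query
    if company_number is not None:
        args["company_number"] = company_number
    return args

by_name = lookup_args(query="Tesco")
by_number = lookup_args(company_number="00445790")
```

Registration numbers like '00445790' keep their leading zeros, so pass them as strings rather than integers.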
Behavior: 5/5

Annotations already declare non-destructive, read-only, idempotent behavior. The description adds valuable context about coverage (5.6 million companies) and specific output fields (SIC codes with full names and mappings), enhancing transparency beyond the annotations.

Conciseness: 5/5

Two sentences, front-loaded with the core action and inputs, no superfluous words. Every sentence serves a purpose.

Completeness: 5/5

For a lookup tool with no output schema, the description thoroughly lists returned fields (name, status, type, incorporation date, address, SIC codes with mappings) and mentions coverage, providing complete context for the agent.

Parameters: 3/5

Schema coverage is 100%, with both parameters well-described. The description adds confirmation that query is for name and company_number is registration number, but does not significantly expand on the schema definitions.

Purpose: 5/5

The description clearly states the tool looks up a UK company by name or registration number and returns specific details including SIC codes with mappings, distinguishing it from sibling tools like search_companies which likely return less detail.

Usage Guidelines: 3/5

The description implies use for looking up company details but does not explicitly differentiate from siblings such as search_companies or search_uk_companies_by_industry, leaving the agent to infer when this tool is appropriate.

search_companies (A)
Read-only · Idempotent

Search UK companies with flexible filters. Combine name search, postcode, status, incorporation date range, SIC/GICS/ICB codes, accounts category, and company type. Returns enriched results with all SIC codes, GICS/ICB mappings, and address details. Cursor pagination for large result sets.

Parameters (JSON Schema)
- query (optional): Company name to search for (min 2 chars)
- postcode (optional): Postcode prefix (e.g. 'SW1', 'EC2A')
- status (optional, default: all): Company status filter
- sic_code (optional): SIC code filter (any level: section letter, 2-5 digit code)
- gics_code (optional): GICS code filter (any level: 2-8 digit code)
- icb_code (optional): ICB code filter (any level: 2-8 digit code)
- company_type (optional): Company type filter (e.g. 'Private Limited Company', 'PLC', 'LLP')
- accounts_category (optional): Accounts category filter (e.g. 'MICRO-ENTITY', 'SMALL', 'MEDIUM', 'DORMANT')
- incorporated_after (optional): ISO date (e.g. '2020-01-01')
- incorporated_before (optional): ISO date (e.g. '2024-12-31')
- limit (optional): Max results (default 20, max 50)
- cursor (optional): Pagination cursor from previous response
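Cursor pagination is easy to get wrong, so here is a sketch of draining all pages of a search. `fetch` stands in for the actual MCP client call, and the `next_cursor` response field name is inferred from the sibling tool's parameter docs; treat both as assumptions:

```python
def iter_pages(fetch, arguments: dict, max_pages: int = 10):
    # fetch(args) is a placeholder for an MCP tool call returning parsed JSON.
    cursor = None
    for _ in range(max_pages):
        args = dict(arguments)
        if cursor:
            args["cursor"] = cursor  # cursor from the previous response
        page = fetch(args)
        yield page
        cursor = page.get("next_cursor")  # assumed response field name
        if not cursor:
            break

# Example filters: active software consultancies (SIC 62020) in SW1.
filters = {"sic_code": "62020", "postcode": "SW1", "status": "active", "limit": 50}
```

The `max_pages` cap bounds the total number of calls in case the cursor never terminates.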
Behavior: 5/5

Annotations already indicate read-only, idempotent behavior. Description adds value by disclosing enriched results (SIC/GICS/ICB mappings, address details) and cursor pagination, providing full transparency beyond annotations.

Conciseness: 5/5

Three sentences, front-loaded with purpose, no wasted words. Efficiently covers filters, return type, and pagination.

Completeness: 5/5

For a search tool with 12 optional parameters and full schema coverage, the description explains return content and pagination. Annotations cover safety. No critical gaps.

Parameters: 3/5

Schema description coverage is 100%, so descriptions for each parameter are already present. The tool description summarizes filter categories but does not add new semantics beyond what the schema provides.

Purpose: 5/5

Description clearly states 'Search UK companies with flexible filters' and lists specific filter types, distinguishing it from siblings like lookup_uk_company (single company) and search_uk_companies_by_industry (industry-specific).

Usage Guidelines: 4/5

Implies use for broad, multi-filter searches but does not explicitly contrast with sibling tools like lookup_uk_company or search_uk_companies_by_industry. The context of siblings provides implicit guidance.

search_sic_codes (A)
Read-only · Idempotent

Search for UK SIC 2007 codes by business activity description. Describe what a business does in plain English and get ranked SIC code recommendations with relevance scores, hierarchy breadcrumbs, and GICS/ICB cross-classification mappings. Useful for finding the right SIC code for Companies House registration.

Parameters (JSON Schema)
- query (required): Business activity description in plain English (e.g. 'online candle shop', 'software development', 'plumbing and heating', 'restaurant')
- limit (optional): Maximum number of results to return (default 5, max 20)
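A short sketch of building the arguments for this tool. The clamping helper is hypothetical; the limit bounds follow the parameter docs above (default 5, max 20):

```python
def sic_search_args(description: str, limit: int = 5) -> dict:
    # Clamp limit to the documented range rather than letting the call fail.
    if not description.strip():
        raise ValueError("describe the business activity in plain English")
    return {"query": description, "limit": max(1, min(limit, 20))}

args = sic_search_args("online candle shop", limit=25)  # limit clamped to 20
```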
Behavior: 5/5

Description adds value beyond annotations by detailing output structure (ranked recommendations, relevance scores, hierarchy breadcrumbs, GICS/ICB mappings) and purpose. Annotations already indicate read-only, idempotent, non-destructive; description complements with behavioral specifics.

Conciseness: 5/5

Two sentences, front-loaded with purpose and action, no redundant information. Every word is informative and earns its place.

Completeness: 5/5

Description fully addresses the tool's function and output despite no output schema. It covers what the tool returns (ranked recommendations, scores, breadcrumbs, mappings) and its use case. Sibling tools provide context for alternatives.

Parameters: 3/5

Schema description coverage is 100% and both parameters have descriptions. Tool description does not add additional meaning beyond schema; baseline of 3 is appropriate.

Purpose: 5/5

Description clearly states the action ('Search for UK SIC 2007 codes by business activity description') and resource (SIC codes) with specific details on output (ranked recommendations, relevance scores, hierarchy breadcrumbs, cross-classifications). Differentiates from siblings like lookup_sic_code and browse_classification_hierarchy.

Usage Guidelines: 4/5

Indicates usage context ('useful for finding the right SIC code for Companies House registration') and implies when to use (from a business description). Does not explicitly state when not to use or name alternatives, but context from sibling tools provides sufficient differentiation.

search_uk_companies_by_industry (A)
Read-only · Idempotent

Find UK companies registered under a specific SIC, GICS, or ICB classification code. Returns enriched company data including all SIC codes, GICS/ICB mappings, address, company type, and incorporation date. Supports filtering by postcode, date range, accounts category, and company type. Cursor pagination for large result sets.

Parameters (JSON Schema)
- code (required): Classification code to search by (e.g. '62020' for IT consultancy, '45' for GICS Energy sector)
- system (optional, default: sic): Classification system the code belongs to
- status (optional, default: active): Filter by company status (default: active only)
- postcode (optional): Postcode prefix filter (e.g. 'SW1', 'EC2A', 'M1')
- company_type (optional): Filter by company type (e.g. 'Private Limited Company', 'PLC', 'LLP')
- accounts_category (optional): Filter by accounts category (e.g. 'MICRO-ENTITY', 'SMALL', 'MEDIUM', 'DORMANT')
- incorporated_after (optional): ISO date (e.g. '2020-01-01'). Only companies incorporated after this date.
- incorporated_before (optional): ISO date. Only companies incorporated before this date.
- limit (optional): Maximum number of companies to return (default 20, max 50)
- cursor (optional): Pagination cursor from a previous response's next_cursor field
Behavior: 5/5

Annotations already indicate read-only and idempotent behavior. The description adds significant value by detailing enriched return data (SIC codes, GICS/ICB mappings, etc.), filtering options, and cursor pagination, providing behavioral insights beyond annotations.

Conciseness: 5/5

The description is three focused sentences: purpose, returned data, and filtering/pagination. No wasted words, front-loaded with the main action.

Completeness: 4/5

Given 10 parameters and no output schema, the description covers the main purpose, return data types, and key features (filtering, pagination). It lacks output structure details but is adequate for an agent to understand the tool's capabilities.

Parameters: 3/5

Schema coverage is 100% with detailed parameter descriptions (e.g., code examples). The tool description provides high-level context but does not add new meaning to individual parameters beyond the schema, earning the baseline score.

Purpose: 5/5

The description clearly states the tool finds UK companies by SIC, GICS, or ICB code, listing returned data such as codes, address, and incorporation date. This distinguishes it from sibling tools like lookup_uk_company (by company number) or search_companies (general search).

Usage Guidelines: 4/5

The description implicitly specifies when to use: when searching by industry classification code. It does not explicitly exclude alternatives or provide when-not guidance, but the context is clear. No exclusions are stated, so it falls short of a 5.
