
NexusFeed: ABC License Compliance

Ownership verified

Server Details

Real-time US state ABC liquor license compliance records for AI agents. Covers CA, TX, NY, and FL — extracted from state portals and served as normalized JSON. Every response includes a _verifiability block with extraction timestamp, confidence score, and source URL.
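Since every response carries a _verifiability block, a client can gate compliance decisions on it before trusting a record. A minimal sketch of that check, assuming the block exposes the fields named elsewhere on this page (extraction_confidence, source_timestamp, data_freshness_ttl_seconds, source_url); the exact response shape is an assumption, and the URL below is a placeholder:

```python
import time

def is_verifiable(block, min_confidence=0.90, now=None):
    """Return True if a _verifiability block meets the stated
    compliance thresholds: extraction_confidence >= 0.90 and a
    source timestamp still within its freshness TTL."""
    now = time.time() if now is None else now
    fresh = (now - block["source_timestamp"]) <= block["data_freshness_ttl_seconds"]
    return block["extraction_confidence"] >= min_confidence and fresh

sample = {
    "extraction_confidence": 0.97,
    "source_timestamp": 1_700_000_000,
    "data_freshness_ttl_seconds": 86_400,
    "source_url": "https://example.gov/license-lookup",  # placeholder
}
print(is_verifiable(sample, now=1_700_000_000 + 3_600))  # True: confident and fresh
```

Pinning `now` in the call makes the freshness check deterministic for testing; in production you would omit it and let the current time apply.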

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

The three tools have distinct, non-overlapping purposes: listing supported jurisdictions (discovery), searching by business/address (fuzzy finding), and looking up by license number (exact verification). An agent can easily select the correct tool based on whether it needs coverage metadata, discovery search, or point-in-time validation.

Naming Consistency: 5/5

All tools follow the consistent abc_verb_noun pattern in snake_case. The verb choices are precise (list, lookup, search) and semantically appropriate to their functions, with clear singular/plural distinctions (state vs licenses) that signal return cardinality.

Tool Count: 5/5

Three tools is exactly appropriate for this narrow, focused domain of read-only license compliance verification. The set covers the essential workflow (discover coverage → search → verify specific) without bloat, fitting perfectly within the ideal 3-15 tool range for a well-scoped MCP server.

Completeness: 4/5

The toolset covers the core read lifecycle for compliance checks (listing states, searching, and retrieving specific records). However, it lacks bulk verification capabilities or monitoring tools for ongoing compliance tracking, which are common enterprise needs. For pure point-in-time verification, the surface is nearly complete.

Available Tools

3 tools
abc_list_states: A

Returns metadata for all US states currently supported by the ABC License API, including the agency name, data freshness SLA, extraction method, and whether CAPTCHA is present. Use this first when building a multi-state compliance workflow to understand coverage.

Parameters

No parameters

Output Schema

result (required)
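The description positions this tool as the discovery step for multi-state workflows. A hypothetical sketch of consuming its coverage metadata, for example to plan around CAPTCHA-protected states; the per-state field names (state, agency, captcha_present) are assumptions based on the description, since the output schema only guarantees a top-level result:

```python
# Assumed shape of the per-state metadata entries inside "result".
states = [
    {"state": "CA", "agency": "California ABC", "captcha_present": False},
    {"state": "TX", "agency": "TABC", "captcha_present": True},
    {"state": "NY", "agency": "NY SLA", "captcha_present": False},
    {"state": "FL", "agency": "FL DBPR", "captcha_present": False},
]

# States without CAPTCHA can be queried directly; the rest rely on the
# server-side solver (TX, per the abc_search_licenses notes below).
direct = [s["state"] for s in states if not s["captcha_present"]]
print(direct)  # ['CA', 'NY', 'FL']
```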
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses specific operational characteristics of the returned data (data freshness SLA, extraction method, CAPTCHA presence). However, it omits runtime constraints like rate limits or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with no redundancy. First sentence front-loads the action and specific return fields; second sentence provides workflow context. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (no parameters) and existence of an output schema, the description provides optimal completeness by previewing the specific metadata fields (agency name, CAPTCHA status) an agent can expect, without duplicating structural return value documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, warranting the baseline score of 4. The description appropriately does not invent parameter semantics where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Returns') and resources ('metadata for all US states'), clearly distinguishing it from the sibling lookup/search tools which operate on individual licenses. It specifies the scope (ABC License API) and enumerates specific metadata fields returned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('Use this first when building a multi-state compliance workflow'), positioning it as a discovery prerequisite to the sibling license tools. Lacks explicit 'when not to use' exclusions, though 'understand coverage' implies it's not for detailed license queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

abc_lookup_license: A

Looks up a specific liquor license by its state-issued license number and returns the full current record including status, expiration, address, conditions, and suspension history. Faster and more precise than abc_search_licenses when you already have the license number. Use this for point-in-time verification (e.g., 'Is license CA-20-621547 currently ACTIVE?'). The _verifiability block contains the exact source URL — agents can independently verify the result by fetching that URL.

Parameters

state (required)
license_number (required)

Output Schema

result (required)
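The description's own example is a point-in-time check ("Is license CA-20-621547 currently ACTIVE?"). A minimal sketch of that check on a returned record, assuming status and expiration fields as named in the description; only a top-level result is guaranteed by the output schema:

```python
from datetime import date

def currently_active(record, today=None):
    """True if the license status is ACTIVE and the record has not
    passed its expiration date (an ISO-format date is assumed)."""
    today = today or date.today()
    expires = date.fromisoformat(record["expiration"])
    return record["status"] == "ACTIVE" and expires >= today

record = {
    "license_number": "CA-20-621547",
    "status": "ACTIVE",
    "expiration": "2026-06-30",
    "suspension_history": [],
}
print(currently_active(record, today=date(2025, 1, 15)))  # True
```

As with any compliance decision on this server, the result should only be trusted after the accompanying _verifiability block passes its confidence and freshness thresholds.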
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the detailed return structure (status, expiration, address, conditions, suspension history) and a performance characteristic ('Faster'), and uniquely documents the _verifiability block for independent verification. It lacks an explicit read-only declaration and auth/rate-limit details, though 'lookup' implies safety.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences efficiently structured: (1) core function and return payload, (2) sibling differentiation and performance, (3) specific use case example, (4) verifiability feature. No redundant text; every sentence adds value beyond structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 parameters with 0% schema coverage and no annotations, the description adequately covers the tool's purpose, return value summary, sibling distinctions, and special features (_verifiability). Minor gap: the 'state' parameter remains undocumented. An output schema exists, so a detailed return-value explanation is a bonus rather than a requirement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, so the description must compensate. It provides context for license_number ('state-issued', with the format example CA-20-621547) but does not explain the separate 'state' parameter's semantics (abbreviation? full name? allowed values?). Partial compensation for a critical parameter-documentation gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'looks up' with clear resource 'liquor license' and distinguishes from sibling abc_search_licenses by stating it's for when you 'already have the license number.' Explicitly mentions return content (status, expiration, conditions, suspension history).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly compares to sibling tool: 'Faster and more precise than abc_search_licenses when you already have the license number.' Provides concrete when-to-use guidance: 'Use this for point-in-time verification' with specific example (checking if CA-20-621547 is ACTIVE).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

abc_search_licenses: A

Searches a US state ABC (Alcoholic Beverage Control) board database for liquor licenses matching a business name, owner name, or address. Returns license type, current status (ACTIVE / SUSPENDED / EXPIRED / REVOKED), expiration date, and any suspension history. Use this before approving a distributor order, binding an insurance policy, or onboarding a merchant to verify they hold a valid liquor license. Supports CA, TX, NY, and FL (TX requires TWOCAPTCHA_API_KEY configured server-side; NY uses NY Open Data API — active licenses only; FL searches the DBPR licensing portal across all board types). Always check the _verifiability block: extraction_confidence >= 0.90 and source_timestamp within data_freshness_ttl_seconds are required for compliance decisions. Note: city, county, zip, and license_status filters are accepted but not yet applied server-side — results may need post-filtering.

Parameters

zip (optional)
city (optional)
state (required)
county (optional)
address (optional)
owner_name (optional)
trade_name (optional)
license_status (optional)
include_inactive (optional)

Output Schema

result (required)
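Because the description warns that city, county, zip, and license_status filters are accepted but not yet applied server-side, callers should post-filter locally. A minimal client-side sketch, assuming each result record carries city and status fields (an assumption; only a top-level result is guaranteed by the output schema):

```python
def post_filter(results, city=None, license_status=None):
    """Apply the filters the server accepts but does not yet enforce."""
    matched = results
    if city:
        matched = [r for r in matched if r.get("city", "").lower() == city.lower()]
    if license_status:
        matched = [r for r in matched if r.get("status") == license_status]
    return matched

results = [
    {"trade_name": "Sunset Spirits", "city": "Austin", "status": "ACTIVE"},
    {"trade_name": "Sunset Wine Bar", "city": "Dallas", "status": "ACTIVE"},
    {"trade_name": "Sunset Saloon", "city": "Austin", "status": "SUSPENDED"},
]
# Keeps only the Austin record that is currently ACTIVE.
print(post_filter(results, city="Austin", license_status="ACTIVE"))
```

The same pattern extends to county and zip; once server-side filtering lands, this step becomes a harmless no-op re-check.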
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Exceptional disclosure given zero annotations. Reveals critical behavioral traits: state support limitations (CA, TX, NY, FL only), API dependencies (TX requires TWOCAPTCHA_API_KEY, NY uses Open Data API), data scope caveats (NY active-only, FL searches across all board types), compliance thresholds (_verifiability block requirements), and operational limitations (filters accepted but not applied server-side).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Six information-dense sentences covering purpose, return values, use cases, state-specific API requirements, compliance thresholds, and filter limitations. There is no filler text, though the state-specific technical details (TWOCAPTCHA_API_KEY, DBPR portal) make it lengthy; that length is justified given the compliance-critical nature of the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage for a complex 9-parameter tool with no annotations. Addresses multi-state regulatory variations, API authentication requirements, data freshness/compliance constraints, and response interpretation. Output schema exists, so detailed return value documentation in description is unnecessary though key fields are mentioned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates by mapping search criteria to parameters (business name → trade_name, owner name → owner_name, address → address). It explicitly warns that city, county, zip, and license_status filters are accepted but not server-side filtered. Minor gap: include_inactive boolean is not mentioned despite the active/inactive license context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Searches a US state ABC (Alcoholic Beverage Control) board database for liquor licenses' with specific match criteria (business name, owner name, or address). It distinguishes from sibling abc_lookup_license by emphasizing the search/matching functionality versus specific license lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance: 'before approving a distributor order, binding an insurance policy, or onboarding a merchant.' Also specifies compliance requirements (extraction_confidence >= 0.90). Does not explicitly mention abc_lookup_license as an alternative for specific license IDs, but use cases are clearly delineated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
