NexusFeed: ABC License Compliance
Server Details
Real-time US state ABC liquor license compliance records for AI agents. Covers CA, TX, NY, and FL — extracted from state portals and served as normalized JSON. Every response includes a _verifiability block with extraction timestamp, confidence score, and source URL.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4.5/5 across all 3 tools.
The three tools have distinct, non-overlapping purposes: listing supported jurisdictions (discovery), searching by business/address (fuzzy finding), and looking up by license number (exact verification). An agent can easily select the correct tool based on whether it needs coverage metadata, discovery search, or point-in-time validation.
All tools follow the consistent abc_verb_noun pattern in snake_case. The verb choices are precise (list, lookup, search) and semantically appropriate to their functions, with clear singular/plural distinctions (state vs licenses) that signal return cardinality.
Three tools is exactly appropriate for this narrow, focused domain of read-only license compliance verification. The set covers the essential workflow (discover coverage → search → verify specific) without bloat, fitting perfectly within the ideal 3-15 tool range for a well-scoped MCP server.
The toolset covers the core read lifecycle for compliance checks (listing states, searching, and retrieving specific records). However, it lacks bulk verification capabilities or monitoring tools for ongoing compliance tracking, which are common enterprise needs. For pure point-in-time verification, the surface is nearly complete.
Available Tools
3 tools

abc_list_states
Returns metadata for all US states currently supported by the ABC License API, including the agency name, data freshness SLA, extraction method, and whether CAPTCHA is present. Use this first when building a multi-state compliance workflow to understand coverage.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
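A minimal sketch of the discovery step the description recommends: call abc_list_states first and confirm the target state is covered before searching. The response shape used here ("state", "agency", "captcha_present" fields) is an assumption based on the description above, not a documented schema.

```python
# Collect the set of state codes the server reports as covered, so a
# multi-state workflow can skip unsupported jurisdictions up front.
def supported_states(list_states_result):
    """Return the set of state codes present in an abc_list_states result."""
    return {entry["state"] for entry in list_states_result}

# Hypothetical result shape for illustration only.
sample = [
    {"state": "CA", "agency": "California ABC", "captcha_present": False},
    {"state": "TX", "agency": "TABC", "captcha_present": True},
]
print(supported_states(sample))  # set order varies
```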
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses specific operational characteristics of the returned data (data freshness SLA, extraction method, CAPTCHA presence). However, it omits runtime constraints like rate limits or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with no redundancy. First sentence front-loads the action and specific return fields; second sentence provides workflow context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (no parameters) and existence of an output schema, the description provides optimal completeness by previewing the specific metadata fields (agency name, CAPTCHA status) an agent can expect, without duplicating structural return value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, warranting the baseline score of 4. The description appropriately does not invent parameter semantics where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Returns') and resources ('metadata for all US states'), clearly distinguishing it from the sibling lookup/search tools which operate on individual licenses. It specifies the scope (ABC License API) and enumerates specific metadata fields returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('Use this first when building a multi-state compliance workflow'), positioning it as a discovery prerequisite to the sibling license tools. Lacks explicit 'when not to use' exclusions, though 'understand coverage' implies it's not for detailed license queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
abc_lookup_license
Looks up a specific liquor license by its state-issued license number and returns the full current record including status, expiration, address, conditions, and suspension history. Faster and more precise than abc_search_licenses when you already have the license number. Use this for point-in-time verification (e.g., 'Is license CA-20-621547 currently ACTIVE?'). The _verifiability block contains the exact source URL — agents can independently verify the result by fetching that URL.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | | |
| license_number | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
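Over the streamable HTTP transport, a call to this tool is a JSON-RPC 2.0 `tools/call` request. A sketch of the request body, using the license number from the description; the assumption that `state` takes a two-letter abbreviation is not confirmed by the schema shown here.

```python
import json

# Build the JSON-RPC 2.0 body for an MCP "tools/call" request against
# abc_lookup_license. How the body is delivered (POST to the server URL)
# depends on the client; only the payload shape is sketched here.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "abc_lookup_license",
        "arguments": {
            "state": "CA",  # assumed two-letter abbreviation (undocumented)
            "license_number": "CA-20-621547",  # example from the description
        },
    },
}
print(json.dumps(payload, indent=2))
```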
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses detailed return structure (status, expiration, address, conditions, suspension history) and performance characteristic ('Faster'). Uniquely documents the _verifiability block for independent verification. Lacks explicit read-only declaration or auth/rate limit details, though 'lookup' implies safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences efficiently structured: (1) core function and return payload, (2) sibling differentiation and performance, (3) specific use case example, (4) verifiability feature. No redundant text; every sentence adds value beyond structured fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters with 0% schema coverage and no annotations, description adequately covers tool purpose, return value summary, sibling distinctions, and special features (_verifiability). Minor gap: 'state' parameter remains undocumented. Output schema exists, so detailed return value explanation is bonus rather than requirement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, requiring description to compensate. Description provides context for license_number ('state-issued', format example CA-20-621547) but fails to explain the separate 'state' parameter semantics (format: abbreviation? full name? required values?). Partial compensation for critical parameter documentation gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'looks up' with clear resource 'liquor license' and distinguishes from sibling abc_search_licenses by stating it's for when you 'already have the license number.' Explicitly mentions return content (status, expiration, conditions, suspension history).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly compares to sibling tool: 'Faster and more precise than abc_search_licenses when you already have the license number.' Provides concrete when-to-use guidance: 'Use this for point-in-time verification' with specific example (checking if CA-20-621547 is ACTIVE).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
abc_search_licenses
Searches a US state ABC (Alcoholic Beverage Control) board database for liquor licenses matching a business name, owner name, or address. Returns license type, current status (ACTIVE / SUSPENDED / EXPIRED / REVOKED), expiration date, and any suspension history. Use this before approving a distributor order, binding an insurance policy, or onboarding a merchant to verify they hold a valid liquor license. Supports CA, TX, NY, and FL (TX requires TWOCAPTCHA_API_KEY configured server-side; NY uses NY Open Data API — active licenses only; FL searches the DBPR licensing portal across all board types). Always check the _verifiability block: extraction_confidence >= 0.90 and source_timestamp within data_freshness_ttl_seconds are required for compliance decisions. Note: city, county, zip, and license_status filters are accepted but not yet applied server-side — results may need post-filtering.
| Name | Required | Description | Default |
|---|---|---|---|
| zip | No | | |
| city | No | | |
| state | Yes | | |
| county | No | | |
| address | No | | |
| owner_name | No | | |
| trade_name | No | | |
| license_status | No | | |
| include_inactive | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
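Since the city, county, zip, and license_status filters are accepted but not yet applied server-side, clients may need to post-filter results themselves. A minimal sketch, assuming each result is a dict with "city" and "status" keys (the actual result field names are not documented in the schema shown here):

```python
# Client-side post-filtering for abc_search_licenses results, compensating
# for filters the server accepts but does not yet apply.
def post_filter(results, city=None, license_status=None):
    """Narrow a result list by city and/or status; None means no filter."""
    out = results
    if city is not None:
        out = [r for r in out if r.get("city", "").lower() == city.lower()]
    if license_status is not None:
        out = [r for r in out if r.get("status") == license_status]
    return out

# Hypothetical results for illustration only.
sample = [
    {"trade_name": "Oak Barrel", "city": "Austin", "status": "ACTIVE"},
    {"trade_name": "Lone Star Spirits", "city": "Dallas", "status": "EXPIRED"},
]
print(post_filter(sample, city="Austin", license_status="ACTIVE"))
```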
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Exceptional disclosure given zero annotations. Reveals critical behavioral traits: state support limitations (CA, TX, NY, FL only), API dependencies (TX requires TWOCAPTCHA_API_KEY, NY uses Open Data API), data scope caveats (NY active-only, FL searches across all board types), compliance thresholds (_verifiability block requirements), and operational limitations (filters accepted but not applied server-side).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Six information-dense sentences covering purpose, return values, use cases, state-specific API requirements, compliance thresholds, and filter limitations. No filler text, though the state-specific technical details (TWOCAPTCHA_API_KEY, DBPR portal) make it lengthy—justified given the compliance-critical nature of the tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage for a complex 9-parameter tool with no annotations. Addresses multi-state regulatory variations, API authentication requirements, data freshness/compliance constraints, and response interpretation. Output schema exists, so detailed return value documentation in description is unnecessary though key fields are mentioned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates by mapping search criteria to parameters (business name → trade_name, owner name → owner_name, address → address). It explicitly warns that city, county, zip, and license_status filters are accepted but not server-side filtered. Minor gap: include_inactive boolean is not mentioned despite the active/inactive license context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Searches a US state ABC (Alcoholic Beverage Control) board database for liquor licenses' with specific match criteria (business name, owner name, or address). It distinguishes from sibling abc_lookup_license by emphasizing the search/matching functionality versus specific license lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance: 'before approving a distributor order, binding an insurance policy, or onboarding a merchant.' Also specifies compliance requirements (extraction_confidence >= 0.90). Does not explicitly mention abc_lookup_license as an alternative for specific license IDs, but use cases are clearly delineated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
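The compliance gate from the search tool's description (extraction_confidence >= 0.90 and source_timestamp within data_freshness_ttl_seconds) can be sketched as a small check. Field names follow the description; the exact shape of the _verifiability block is an assumption.

```python
from datetime import datetime, timezone

# Gate a compliance decision on the _verifiability block: require high
# extraction confidence and a source timestamp within the freshness TTL.
def is_compliant(verifiability, now=None):
    """True if the record meets the confidence and freshness thresholds."""
    now = now or datetime.now(timezone.utc)
    ts = datetime.fromisoformat(verifiability["source_timestamp"])
    age_seconds = (now - ts).total_seconds()
    return (
        verifiability["extraction_confidence"] >= 0.90
        and age_seconds <= verifiability["data_freshness_ttl_seconds"]
    )

# Hypothetical _verifiability payload for illustration only.
block = {
    "extraction_confidence": 0.97,
    "source_timestamp": "2024-01-01T00:00:00+00:00",
    "data_freshness_ttl_seconds": 86400,
}
print(is_compliant(block, now=datetime.fromisoformat("2024-01-01T12:00:00+00:00")))  # → True
```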
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.