Glama
Noquarter6

contractor-license-mcp-server

clv_search_by_name

Read-only · Idempotent

Search for licensed contractors by business or individual name in US state databases to verify license numbers, status, and trade qualifications.

Instructions

Search for contractors by business or individual name in a state licensing database. Returns matching contractors with license numbers, status, and confidence scores.

Input Schema

Name | Required | Description | Default
--- | --- | --- | ---
state | Yes | Two-letter US state code (e.g. 'CA', 'TX'). Use clv_list_supported_states to see available states. | —
name | Yes | Business or individual name to search for in the state licensing database. | —
trade | No | The trade/contractor type to filter by (e.g. 'General Contractor', 'Electrical'). Use clv_list_supported_states to see valid values per state. | general
limit | No | Maximum number of results to return. | —
response_format | No | Response format. | markdown
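Based on the schema above, a call to this tool might be wrapped in a standard MCP `tools/call` request like the following sketch. The argument values ("Acme Builders", "CA", etc.) are hypothetical, and `trade`/`response_format` fall back to their defaults ("general", "markdown") when omitted; exact transport details depend on the MCP client.

```python
import json

# Hypothetical arguments for clv_search_by_name, matching the input schema.
arguments = {
    "state": "CA",                  # required: two-letter US state code
    "name": "Acme Builders",        # required: business or individual name
    "trade": "General Contractor",  # optional trade filter
    "limit": 5,                     # optional cap on results
}

# A JSON-RPC 2.0 tools/call request wrapping those arguments,
# as defined by the Model Context Protocol.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "clv_search_by_name", "arguments": arguments},
}

print(json.dumps(request, indent=2))
```

Omitting the optional keys entirely (rather than passing null) is the usual way to accept a tool's defaults.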
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, indicating a safe, non-mutating, repeatable operation. The description adds valuable context beyond this by specifying the database source ('state licensing database') and detailing the return content ('matching contractors with license numbers, status, and confidence scores'), which helps the agent understand the tool's behavior and output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core purpose and efficiently lists the return values. There is no wasted verbiage, and every part of the sentence contributes essential information, making it highly concise and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With five parameters, rich annotations (covering safety and idempotency), and 100% schema coverage, the description is largely complete. It specifies the database source and return content; because no output schema is declared, it could still benefit from more detail on result structure (e.g., pagination or error handling). Even so, it provides sufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description mentions 'business or individual name' and 'state licensing database', which aligns with the schema but doesn't add significant semantic detail beyond it. The baseline score of 3 is appropriate as the schema handles parameter documentation effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search for contractors'), the resource ('in a state licensing database'), and the scope ('by business or individual name'). It distinguishes from siblings like clv_verify_license (which verifies specific licenses) and clv_list_supported_states (which lists states), making the purpose unambiguous and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'in a state licensing database' and listing what it returns, but it doesn't explicitly state when to use this tool versus alternatives like clv_batch_verify or clv_verify_license. The input schema references clv_list_supported_states for valid values, providing some guidance, but the description itself lacks explicit when/when-not instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Noquarter6/contractor-license-mcp-server'
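The same endpoint can be queried from Python's standard library; a minimal sketch equivalent to the curl call above (no authentication assumed):

```python
import json
import urllib.request

# Directory entry for this server in the Glama MCP API.
URL = "https://glama.ai/api/mcp/v1/servers/Noquarter6/contractor-license-mcp-server"

def fetch_server_info(url: str = URL) -> dict:
    """GET the server's directory entry and decode the JSON body."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Calling `fetch_server_info()` returns the decoded JSON document describing the server's tools and metadata.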

If you have feedback or need assistance with the MCP directory API, please join our Discord server.