LocalPro
Server Details
Search thousands of verified US local service providers across trades like crawl space repair, floor coating, radon mitigation, and laundry services. Returns ratings, descriptions, services offered, pricing, and profile links. Every result is curated weekly with complete data.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4.5/5 across all 5 tools.
Each tool has a distinct, non-overlapping purpose: list_niches (discover categories), list_cities (discover locations), list_service_types (discover subcategories), search_providers (query/filter), and get_provider (retrieve specific details). The boundary between search_providers (plural, filtering) and get_provider (singular, ID-based) is clear and follows standard conventions.
All five tools follow a consistent verb_noun snake_case pattern: get_provider, list_cities, list_niches, list_service_types, search_providers. Verbs are used appropriately ('list' for enumeration, 'search' for querying, 'get' for specific retrieval) creating a predictable interface.
Five tools is an ideal count for this focused consumer discovery domain. The set includes three discovery aids (niches, cities, service types), one search interface, and one detail retrieval endpoint—covering the complete user journey without bloat or missing steps.
The tool surface covers the full discovery lifecycle: browsing categories/locations, filtering by service types, searching providers, and retrieving detailed profiles. Minor gaps exist (no dedicated review browsing separate from provider details, no bulk export), but core read-only workflows are fully supported.
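The discovery lifecycle described above can be sketched as the sequence of JSON-RPC `tools/call` requests an MCP client would send over Streamable HTTP. The helper below only builds the request payloads; the argument values (niche ID, city slug, provider slug) are the illustrative examples from this listing, not guaranteed live data:

```python
import json

def tools_call(name, arguments, request_id):
    """Build a JSON-RPC 2.0 `tools/call` request as used by MCP servers."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# The full read-only workflow, in the order the tool descriptions prescribe:
# discover niches, discover cities, search, then fetch one provider.
steps = [
    tools_call("list_niches", {}, 1),
    tools_call("list_cities", {"niche_id": "coated-local", "state": "MN"}, 2),
    tools_call("search_providers",
               {"niche_id": "coated-local", "city": "minneapolis-mn",
                "service_type": "epoxy", "limit": 5}, 3),
    tools_call("get_provider",
               {"niche_id": "coated-local", "provider_slug": "abc-coatings"}, 4),
]
print(json.dumps(steps[0]))
```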
Available Tools
5 tools
get_provider
Get a detailed summary of a specific verified service provider. Returns business description, services, pricing summary, coverage area, service details, and a link to the full profile page. With a valid API key (X-API-Key header): also returns full pricing breakdown and certifications. Without a key: returns pricing_summary and a premium_available flag. Contact details (phone, email, address) are available on the listing page via listing_url.
| Name | Required | Description | Default |
|---|---|---|---|
| niche_id | Yes | Niche ID (e.g. "coated-local"). Must match the niche used in search_providers. | |
| provider_slug | Yes | Provider URL slug from search_providers results (e.g. "abc-coatings") | |
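A get_provider request might be assembled as below. The endpoint URL is a hypothetical placeholder (the listing does not show the server URL), and the X-API-Key header is optional: with it, the response includes the full pricing breakdown and certifications; without it, only pricing_summary plus a premium_available flag:

```python
import json
import urllib.request

# Hypothetical endpoint; substitute the real server URL.
ENDPOINT = "https://example.com/mcp"

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_provider",
        "arguments": {
            "niche_id": "coated-local",       # must match the search niche
            "provider_slug": "abc-coatings",  # slug from search_providers results
        },
    },
}

# Build (but do not send) the request; the API key unlocks premium fields.
req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "X-API-Key": "YOUR_KEY"},
    method="POST",
)
print(req.full_url)
```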
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and succeeds well: explains auth-gated behavior (API key unlocks full pricing/certifications vs limited summary), discloses that contact details are NOT returned but available via listing_url, and lists return fields. Missing only rate limits or error behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences with zero waste. Front-loaded with core purpose (sentence 1), followed by conditional behavior (sentences 2-3), and data availability note (sentence 4). Appropriate length for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent coverage given no output schema exists: describes all returned data fields, explains the conditional response structure based on authentication, and clarifies where to find additional data (listing_url). Only missing edge case handling for a complete picture.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with param descriptions. The description adds valuable semantic context beyond the schema: it explains that provider_slug comes from search_providers results and that niche_id must match the search niche, clarifying the data relationship and constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the specific verb ('Get') and resource ('detailed summary of a specific verified service provider'), distinguishing it from siblings like search_providers (list/search) and list_* tools (browse). The word 'specific' signals this retrieves one item by identifier.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies clear workflow by referencing 'provider_slug from search_providers results' and that niche_id 'must match the niche used in search_providers', indicating this tool follows search_providers. However, it lacks explicit 'when not to use' guidance or direct comparison to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_cities
List available cities and metro areas where verified providers operate for a given niche. Use this to discover valid city slugs before calling search_providers. Cities are grouped by metro area where applicable (e.g. "minneapolis-mn" covers Minneapolis, St. Paul, and surrounding suburbs). Optionally filter by state abbreviation.
| Name | Required | Description | Default |
|---|---|---|---|
| state | No | Two-letter state abbreviation to filter by (e.g. "MN", "CO") | |
| niche_id | Yes | Niche ID from list_niches (e.g. "coated-local", "radon-local") | |
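Since the state filter is a two-letter abbreviation per the schema, a client can validate it before calling the tool. This is a local sketch of argument assembly, not part of the server's API:

```python
import re

def list_cities_args(niche_id, state=None):
    """Assemble list_cities arguments, checking the optional state filter
    against the two-letter format the schema specifies (e.g. "MN", "CO")."""
    if state is not None and not re.fullmatch(r"[A-Z]{2}", state):
        raise ValueError("state must be a two-letter abbreviation, e.g. 'MN'")
    args = {"niche_id": niche_id}
    if state is not None:
        args["state"] = state
    return args

print(list_cities_args("coated-local", "MN"))
```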
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses metro area grouping behavior ('e.g. minneapolis-mn covers Minneapolis, St. Paul...') and data quality context ('verified providers'). Lacks explicit read-only declaration or error behavior, but 'List' verb implies safe operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences: purpose declaration, usage guideline, and behavioral detail. Every sentence adds unique value beyond the schema. No redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter discovery tool with complete schema coverage and no output schema, the description adequately covers workflow (discover before search), data grouping logic, and parameter usage. Could improve by explicitly stating return format (list of slugs vs objects), but 'discover valid city slugs' provides sufficient hint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (niche_id and state fully documented). Description reinforces optionality ('Optionally filter by state') and required context ('for a given niche'), but schema already carries the semantic load. Baseline 3 appropriate when structured documentation is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('List') + resource ('cities and metro areas') + scope ('where verified providers operate for a given niche'). It clearly distinguishes this as a discovery tool for city slugs rather than provider search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use this to discover valid city slugs before calling search_providers.' This provides clear workflow guidance and directly references the sibling tool, establishing the correct sequence of operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_niches
List all available service directories in the LocalPro network. This is the starting point for discovering what categories of verified local service providers are available. Categories include floor coating, radon mitigation, foundation repair, basement waterproofing, crawl space repair, mold/asbestos/lead remediation, septic services, commercial electrical, and laundry services. Returns niche IDs needed for all other tools.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
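Because no output schema is published, the response shape below is purely illustrative; the point is that the niche IDs extracted here feed every other tool:

```python
# Hypothetical response shape for list_niches; field names are assumptions,
# not documented by this listing.
sample_response = {
    "niches": [
        {"id": "coated-local", "name": "Floor Coating"},
        {"id": "radon-local", "name": "Radon Mitigation"},
    ]
}

# Extract the IDs required by list_cities, list_service_types,
# search_providers, and get_provider.
niche_ids = [n["id"] for n in sample_response["niches"]]
print(niche_ids)
```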
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds crucial information that it returns 'niche IDs' and provides examples of categories. However, does not specify return format (array vs object), pagination behavior, caching, or rate limits despite being a network operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences: purpose statement, usage guidance with workflow context, and return value/examples. Every sentence adds distinct value with no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple parameterless discovery tool with no output schema, the description is complete. It explains what is returned (niche IDs), why it matters (prerequisite for other tools), and provides concrete examples of the domain categories covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters, establishing baseline 4. No parameters require semantic explanation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'List' with resource 'service directories' and scope 'in the LocalPro network'. Explicitly distinguishes from siblings by positioning as the 'starting point' and stating it provides IDs 'needed for all other tools', clarifying its role relative to get_provider and search_providers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on when to use ('starting point for discovering') and implies workflow order (must be used before other tools that require niche IDs). Lacks explicit 'when not to use' or named alternative tools for specific scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_service_types
List the valid service type categories for a given niche directory. Use this before calling search_providers with a service_type filter to ensure you pass a valid value. Each niche has its own taxonomy — for example, "coated-local" has epoxy, polyaspartic, metallic_epoxy, etc., while "radon-local" has radon_testing, radon_mitigation, ssd_installation, etc.
| Name | Required | Description | Default |
|---|---|---|---|
| niche_id | Yes | Niche ID (e.g. "coated-local", "radon-local"). Get options from list_niches. | |
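The per-niche taxonomy behavior can be illustrated with the examples from the description. In practice a client should fetch the live taxonomy from list_service_types rather than hard-code it as done here:

```python
# Taxonomy examples taken from the tool description above; each niche has
# its own set of valid service_type slugs.
taxonomies = {
    "coated-local": ["epoxy", "polyaspartic", "metallic_epoxy"],
    "radon-local": ["radon_testing", "radon_mitigation", "ssd_installation"],
}

def is_valid_service_type(niche_id, service_type):
    """Check a service_type against its niche's taxonomy before passing it
    to search_providers (sketch; fetch the real list at runtime)."""
    return service_type in taxonomies.get(niche_id, [])

print(is_valid_service_type("coated-local", "epoxy"))  # True
print(is_valid_service_type("radon-local", "epoxy"))   # False
```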
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It effectively discloses that 'Each niche has its own taxonomy' and provides concrete examples of return values (epoxy, radon_testing, etc.), giving crucial context about the domain-specific data structure. Lacks only error handling or rate limit details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste: sentence 1 states purpose, sentence 2 provides usage guidelines, sentence 3 gives behavioral examples. Information is front-loaded and every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool with no output schema, the description is remarkably complete. It explains the input source (implied via list_niches reference), the output structure (via taxonomy examples), and the downstream consumer (search_providers), covering the full workflow context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema description coverage for the niche_id parameter, the description adds valuable workflow semantics by implying the parameter comes from list_niches (referenced in schema) and reinforcing the domain meaning through the 'given niche directory' phrasing and concrete examples (coated-local, radon-local).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('List') and resource ('service type categories'), clearly defining scope ('for a given niche directory'). It effectively distinguishes from sibling tools by contrasting with list_niches (which lists niches) and explaining its role relative to search_providers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance: 'Use this before calling search_providers with a service_type filter to ensure you pass a valid value.' This clearly states when to use the tool and its relationship to a specific sibling tool, preventing incorrect usage patterns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_providers
Search for verified local service providers across 9 trade categories including floor coating, radon mitigation, foundation repair, basement waterproofing, crawl space repair, mold/asbestos remediation, septic services, commercial electrical, and laundry services. Returns provider name, rating, services offered, certifications, years in business, and a link to the full profile with contact details. Covers major US metro areas. Use list_niches first to get valid niche IDs, and list_service_types for valid service_type values.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | City or metro area slug (e.g. "denver-co", "minneapolis-mn"). Get options from list_cities. | |
| limit | No | Max results to return (default 10) | |
| niche_id | Yes | Niche ID (e.g. "coated-local", "radon-local"). Get options from list_niches. | |
| service_type | No | Service type slug to filter by (e.g. "epoxy", "radon_testing"). Get valid values from list_service_types. | |
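Only niche_id is required, so a client should omit unused optional filters rather than send null values. A small argument builder, assuming the schema's default limit of 10:

```python
def search_providers_args(niche_id, city=None, service_type=None, limit=10):
    """Assemble search_providers arguments, dropping unset optional
    filters; limit defaults to 10 per the schema. Sketch only."""
    args = {"niche_id": niche_id, "limit": limit}
    if city:
        args["city"] = city          # slug from list_cities
    if service_type:
        args["service_type"] = service_type  # slug from list_service_types
    return args

print(search_providers_args("radon-local", city="denver-co",
                            service_type="radon_testing"))
```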
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return values ('Returns provider name, rating...'), geographic scope ('Covers major US metro areas'), and data quality ('verified' providers). However, it omits pagination details beyond the limit parameter and doesn't mention safety/idempotency characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with zero waste: opens with core function, lists categories for context, specifies return payload, notes geographic coverage, and closes with prerequisite workflow. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so description compensates by detailing the return structure (fields returned). Given the rich input schema (100% coverage) and clear sibling dependencies documented, the description provides complete context for tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description adds semantic value by mapping niche_id to the 9 trade categories listed and explicitly linking service_type to list_service_types, helping the agent understand the relationship between parameters and sibling tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Search) + resource (verified local service providers) + exact scope (9 trade categories enumerated). Clearly distinguishes from sibling get_provider by emphasizing search/filtering across categories versus presumably retrieving a specific provider.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states prerequisites: 'Use list_niches first to get valid niche IDs, and list_service_types for valid service_type values.' This establishes the exact workflow sequence and clearly differentiates when to use this tool versus its list_* siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
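Before publishing the file, you can sanity-check its structure locally. This is a minimal sketch based on the fields shown above, not an official validator:

```python
import json

# The example glama.json document from above, checked for the two fields
# Glama's verification needs: the $schema URL and at least one maintainer
# with an email address.
doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

assert doc["$schema"].startswith("https://glama.ai/")
assert doc["maintainers"] and all("email" in m for m in doc["maintainers"])
print("glama.json looks structurally valid")
```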
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.