bizintel
Server Details
Local business intel for AI agents: audits, lead scoring, tech stack, prospecting.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: bch1212/mcp-bizintel
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.2/5 across 8 of 8 tools scored.
Each tool addresses a distinct function: auditing, searching, lead scoring, or details. No overlapping purposes.
All tool names follow a consistent verb_noun pattern in snake_case (e.g., audit_website, search_businesses).
Eight tools is an appropriate count for a business intelligence server: neither too sparse nor overloaded.
Covers core workflows (search, audit, details, scoring) but lacks data export or integration tools, a minor gap.
Available Tools
8 tools

audit_website (Grade: B)
Score a website 0-100 across SSL, mobile-readiness, speed, contact, booking.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
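For illustration, a hypothetical invocation using the standard MCP `tools/call` request shape; the URL value is invented:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "audit_website",
    "arguments": {
      "url": "https://example.com"
    }
  }
}
```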
Output Schema
No output parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions scoring 0-100 but does not disclose behavior for unreachable URLs, error handling, or whether the operation is read-only. Minimal transparency beyond the scoring range.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence, no wasted words. Front-loaded with the action and result range. Could be slightly more structured but efficient overall.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Since an output schema exists (not shown here), the description need not explain return values. However, missing behavioral details, such as error conditions and the fact that it handles only single URLs, limit completeness for a tool with one parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% and the description does not mention the required 'url' parameter beyond implicit context. No details on URL format, protocols, or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool scores a website 0-100 across specific dimensions (SSL, mobile-readiness, speed, contact, booking), providing a specific verb and resource that distinguishes it from siblings like bulk_audit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs. alternatives like bulk_audit or get_business_details. The context implies single-website analysis, but lacks when-not-to-use or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bulk_audit (Grade: B)
Audit up to 20 URLs concurrently. Results sorted worst-first (best leads first).
| Name | Required | Description | Default |
|---|---|---|---|
| urls | Yes | | |
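A hypothetical `params` block in the same `tools/call` shape; the URLs are invented, and the description caps the array at 20:

```json
{
  "name": "bulk_audit",
  "arguments": {
    "urls": [
      "https://example.com",
      "https://example.org"
    ]
  }
}
```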
Output Schema
No output parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses concurrency and sorting but does not state whether the tool is read-only, what permissions are required, or whether rate limiting applies. 'Audit' implies a read-only operation, but this is never made explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Essential information conveyed efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with an output schema, the description is minimal but covers the key function and sorting. Lacks behavioral context like rate limits or authentication, which may be needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'urls' has no schema description (0% coverage). The description adds meaning by specifying the array as URLs and capping at 20 concurrent, but does not explain format or constraints beyond that.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it audits URLs concurrently and sorts results worst-first, with a maximum of 20 URLs. However, it does not explicitly differentiate from sibling tool 'audit_website', which likely audits a single URL.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'audit_website'. The description mentions concurrency limits but offers no when-to-use or when-not-to-use context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_no_booking (Grade: C)
Businesses with a website but no online booking system. Big opportunity for SaaS pitches.
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | | |
| limit | No | | |
| niche | Yes | | |
| state | No | | |
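A hypothetical `params` block with all four parameters; values are invented, and `state` and `limit` are optional:

```json
{
  "name": "find_no_booking",
  "arguments": {
    "city": "Austin",
    "state": "TX",
    "niche": "plumber",
    "limit": 10
  }
}
```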
Output Schema
No output parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It only states the output purpose, not how the tool operates, any side effects, or data freshness. Minimal disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence that omits essential details. While concise, it is under-specified for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema (not shown), the description lacks parameter information and usage guidance. For a tool with 4 parameters and no annotations, the description is insufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description adds no meaning to any of the four parameters (city, niche, limit, state). No explanation of what these parameters represent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds businesses with a website but no online booking system, targeting a specific use case for SaaS pitches. It distinguishes from siblings like find_no_website and search_businesses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied through the description (finding leads for booking SaaS), but there is no explicit guidance on when not to use or alternatives among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_no_website (Grade: B)
Find businesses in a niche with NO website at all — hottest cold-outreach leads.
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | | |
| limit | No | | |
| niche | Yes | | |
| state | No | | |
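A hypothetical `params` block using only the required parameters (values invented):

```json
{
  "name": "find_no_website",
  "arguments": {
    "city": "Denver",
    "niche": "barber"
  }
}
```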
Output Schema
No output parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full burden for behavioral disclosure. It states the tool finds businesses without websites, but omits details about data sources, result limits, pagination, or accuracy. The purpose is clear but behavioral constraints are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no redundancy. However, a bit more structure or elaboration on parameters would improve usability without significant bloat.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema, the description does not hint at return fields or result structure. For a tool with 4 parameters and no annotations, it lacks essential context about inputs and outputs, making it incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must explain parameters. It only indirectly references 'niche' (through 'in a niche'), but does not mention 'city', 'state', or 'limit'. No guidance on format, meaning, or defaults beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('find businesses'), the resource (businesses with no website), and the use case ('hottest cold-outreach leads'). It distinguishes itself from sibling tools like 'search_businesses' and 'find_no_booking' by specifying the unique 'no website' filter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for finding leads without a website for cold outreach, but it does not explicitly state when to use it versus alternatives like 'search_businesses' or 'find_no_booking'. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_business_details (Grade: A)
Resolve a single business to phone, address, website, hours, rating.
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | | |
| business_name | Yes | | |
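A hypothetical `params` block; the business name and city are invented:

```json
{
  "name": "get_business_details",
  "arguments": {
    "business_name": "Joe's Pizza",
    "city": "Chicago"
  }
}
```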
Output Schema
No output parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It states the output fields (phone, address, etc.) but does not disclose edge cases (e.g., business not found, duplicate matches) or any behavioral traits like rate limits or required permissions. Adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence, front-loaded with the core purpose. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with two parameters, the description covers the main output fields. It lacks error handling or behavior on missing data, but given the presence of an output schema (not provided) and the simple nature, it is mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description adds little beyond the schema. It mentions 'business_name' and 'city' implicitly but provides no formatting guidance, constraints, or examples. For a tool with two required string parameters, this is insufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: resolving a single business to its phone, address, website, hours, and rating. The verb 'resolve' and the list of output fields make it distinct from sibling tools like search_businesses (find multiple) or get_tech_stack (tech details).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you have a specific business name and city, but it does not explicitly state when to use this versus alternatives like search_businesses or when not to use it. No exclusions or alternative tool names are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tech_stack (Grade: C)
CMS, booking platform, email provider, analytics detected on a target URL.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
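A hypothetical `params` block (URL invented):

```json
{
  "name": "get_tech_stack",
  "arguments": {
    "url": "https://example.com"
  }
}
```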
Output Schema
No output parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral aspects like read-only nature, potential rate limits, or error handling. The description only lists what it detects, not how it behaves or what happens on failure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single incomplete sentence (fragment). While concise, it lacks proper structure and could be improved by being a full sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple input schema and existence of an output schema, the description is minimally adequate. However, it does not clarify the scope of detection or potential limitations, which would be helpful for context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema coverage is 0%, meaning the description does not explain the 'url' parameter beyond implying it is a target URL. It adds little value over the schema, which already says it's a string.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool detects CMS, booking platform, email provider, and analytics on a target URL. It distinguishes from siblings like 'audit_website' which likely cover broader analysis. However, it uses a passive construction ('detected') rather than an active verb.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus its siblings. For example, it does not clarify that this tool is for tech stack detection only, while 'audit_website' might be for a full website audit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
score_lead (Grade: C)
Composite 0-100 lead score combining audit + demand signal.
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | | |
| niche | No | | |
| business_name | Yes | | |
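A hypothetical `params` block; values are invented, and `niche` is optional:

```json
{
  "name": "score_lead",
  "arguments": {
    "business_name": "Joe's Pizza",
    "city": "Chicago",
    "niche": "restaurant"
  }
}
```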
Output Schema
No output parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It mentions the output is a score from 0-100 combining audit and demand signal, but does not state whether it is read-only, destructive, or requires authentication. Essential behavioral information is missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the key information: 'Composite 0-100 lead score combining audit + demand signal.' Every word serves a purpose, with no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has 3 parameters with 0% coverage and no annotations, the description is too brief. It does not explain how the score is calculated or how the parameters affect the output, and it provides too little context for an AI agent to use the tool correctly. The presence of an output schema mitigates this slightly, but overall completeness is low.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning the input schema lacks descriptions. The tool description does not add any meaning to the parameters (city, niche, business_name) beyond their names. It fails to compensate for the schema gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Composite 0-100 lead score combining audit + demand signal.' This is a specific verb (score) and resource (lead), and it distinguishes from sibling tools like audit_website (audit), find_no_booking (find), and search_businesses (search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus alternatives. It lacks explicit context about prerequisites, when to avoid, or which specific scenarios are appropriate. The purpose is implied but not elaborated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_businesses (Grade: A)
Find local businesses by niche + city. Yelp-backed; OSM fallback if Yelp not configured.
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | | |
| limit | No | | |
| niche | Yes | | |
| state | No | | |
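A hypothetical `params` block; values are invented, and `limit` and `state` are optional:

```json
{
  "name": "search_businesses",
  "arguments": {
    "city": "Seattle",
    "niche": "dentist",
    "limit": 5
  }
}
```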
Output Schema
No output parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes fallback behavior (Yelp → OSM) indicating dependency on external services. Lacks details on rate limits, authentication, or result size implications, but given no annotations, provides basic transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with purpose first, then data source detail. No filler words, every part earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists and there are no annotations, the description covers the core functionality and data sources. Minor gaps remain (parameter details), but it is contextually adequate for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description mentions two parameters (niche, city) but does not explain their semantics (e.g., that 'niche' means business category), nor does it mention limit or state. It adds some value but is incomplete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specifies verb 'find', resource 'local businesses', and key parameters 'niche + city'. Clearly distinguishes from sibling tools like get_business_details which is for individual business info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions data source preferences (Yelp primary, OSM fallback) which hints at configuration requirements but does not explicitly state when to use or avoid this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.