discava – Business Directory for AI
Server Details
Search for local businesses worldwide, with structured data optimized for AI agents.
- Search millions of businesses across 49 countries (Europe, North America, South America, Asia, Oceania)
- Quality and demand scoring for every business
- Ranking based on real user click-through data
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
6 tools

get_business — Read-only, Idempotent
Get complete details for one or more businesses including address, phone, website, opening hours, services, payment methods, social links, logo, business image, and coordinates. Pass comma-separated IDs for batch requests.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | One or more business IDs, comma-separated for batch (e.g. "id1,id2,id3"). ALWAYS use batch when fetching multiple businesses. | |
| format | No | Response format: "json" (default) or "html" for interactive cards | |
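A minimal sketch of assembling arguments for this tool, following the comma-separated batching the description recommends. The helper name `build_get_business_args` is hypothetical, not part of the API:

```python
def build_get_business_args(ids, fmt="json"):
    """Build the arguments dict for a get_business call.

    Joins multiple IDs into one comma-separated batch request,
    as the tool description recommends for multi-business fetches.
    """
    if not ids:
        raise ValueError("at least one business ID is required")
    if fmt not in ("json", "html"):
        raise ValueError('format must be "json" or "html"')
    return {"id": ",".join(ids), "format": fmt}

# One batched call instead of three separate ones:
print(build_get_business_args(["id1", "id2", "id3"]))
# {'id': 'id1,id2,id3', 'format': 'json'}
```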
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, confirming the safe, repeatable nature of the operation. The description adds valuable behavioral context by enumerating the specific data fields returned (services, payment methods, coordinates, etc.), which compensates for the lack of an output schema. It does not contradict any annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of exactly two high-density sentences. The first front-loads the core capability and enumerates return fields; the second provides the batching instruction. There is no redundant or filler text—every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description excellently compensates by listing all major data fields returned (address, phone, social links, coordinates, etc.). Combined with comprehensive annotations and a simple 2-parameter schema, the description provides sufficient context for an agent to understand both the input requirements and the expected data payload.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage with detailed explanations and examples (e.g., 'e.g. "id1,id2,id3"'). The description mentions 'Pass comma-separated IDs for batch requests', reinforcing the batching concept, but adds no additional syntax, validation rules, or semantic details beyond what the schema already provides. Baseline 3 is appropriate given the schema's completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('businesses'), explicitly scopes the operation ('complete details'), and lists specific data fields returned (address, phone, website, etc.). The mention of 'comma-separated IDs' distinguishes this from the sibling 'search_businesses', implying this tool requires specific identifiers rather than search queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear implicit guidance by mentioning 'Pass comma-separated IDs', indicating this tool is for when specific business IDs are already known. It also promotes efficient usage by highlighting batch request capability. However, it does not explicitly name 'search_businesses' as the alternative for when IDs are unknown, relying on the user to infer this from the tool names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_rankings — Read-only, Idempotent
Get the most popular businesses in an area, ranked by real user click-through data. Use to find top-rated businesses by category.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | City name to filter results | |
| lang | No | Language for category labels: "de", "en", "fr", "nl", "it", "es", "pt", "pl" | |
| limit | No | Number of results (1-20, default 10) | |
| country | Yes | ISO country code (required, e.g. "DE", "AT", "US") | |
| category | No | Category slug or name (e.g. "plumber", "restaurant", "Klempner") | |
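The constraints in the table above can be enforced client-side before the call. A sketch, assuming nothing beyond the documented parameters (the `build_rankings_args` helper is illustrative):

```python
VALID_LANGS = {"de", "en", "fr", "nl", "it", "es", "pt", "pl"}

def build_rankings_args(country, city=None, category=None, lang=None, limit=10):
    """Validate and assemble arguments for get_rankings.

    Mirrors the documented constraints: a required two-letter ISO country
    code, a 1-20 result limit, and a fixed set of label languages.
    """
    if not (isinstance(country, str) and len(country) == 2 and country.isupper()):
        raise ValueError('country must be an ISO code like "DE" or "US"')
    if not 1 <= limit <= 20:
        raise ValueError("limit must be between 1 and 20")
    if lang is not None and lang not in VALID_LANGS:
        raise ValueError(f"lang must be one of {sorted(VALID_LANGS)}")
    args = {"country": country, "limit": limit}
    for key, value in (("city", city), ("category", category), ("lang", lang)):
        if value is not None:
            args[key] = value
    return args

print(build_rankings_args("DE", city="Hamburg", category="plumber"))
# {'country': 'DE', 'limit': 10, 'city': 'Hamburg', 'category': 'plumber'}
```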
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable context beyond annotations by specifying the ranking methodology ('real user click-through data'). However, misses opportunity to disclose return format, pagination behavior with the 1-20 limit, or error conditions despite annotations covering the safety profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first defines functionality and data source, second states use case. Front-loaded with essential information and appropriate length for complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and comprehensive annotations covering safety/read-only aspects, the description adequately covers the tool's purpose and ranking methodology. Missing output details are acceptable since no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description mentions 'area' (aligning with city/country) and 'category' which reinforces parameter usage, but adds no semantic details beyond schema descriptions (e.g., doesn't explain the lang parameter's purpose or ISO format for country).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Get') and resource ('businesses'), with clear differentiating factor ('ranked by real user click-through data') that hints at distinction from sibling search_businesses. However, it doesn't explicitly differentiate from search_businesses or get_business.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context ('Use to find top-rated businesses by category') suggesting when to use it, but lacks explicit when-not-to-use guidance or named alternatives. Doesn't clarify when to prefer this over search_businesses.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check — Read-only, Idempotent
Check if the discava API is online and responding. Returns status and version. No parameters required.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, establishing the safety profile. The description adds valuable behavioral context by specifying that it 'Returns status and version,' which compensates for the missing output schema. It does not contradict any annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three efficient sentences with no filler content. It is front-loaded with the core purpose ('Check if the discava API is online'), followed by return values and parameter requirements. Every sentence earns its place in the 16-word total.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity as a health check endpoint and the richness of annotations (covering safety and idempotency), the description provides complete context. It covers the purpose, return values (compensating for no output schema), and parameter requirements sufficiently for an agent to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the baseline score is 4 per the scoring rubric. The description confirms 'No parameters required,' which aligns with the empty input schema and its description. No additional parameter semantics are needed or provided beyond this confirmation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific verb 'Check' and resource 'discava API' with clear scope ('online and responding'). It effectively distinguishes from siblings like get_business and send_feedback by focusing on infrastructure health rather than business data operations. The mention of returning 'status and version' further clarifies the diagnostic nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context that this tool verifies API availability, which inherently signals when to use it (when checking connectivity). However, it lacks explicit guidance such as 'use this before calling other tools' or explicit contrast with sibling tools. The usage is clear from context but not explicitly prescribed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_businesses — Read-only, Idempotent
Search for local businesses. Returns name, category, city, country, logo_url, available_details (what data exists), and scores. Check available_details to see what is available, then call get_business for full details.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | No | Latitude for distance calculation | |
| lon | No | Longitude for distance calculation | |
| city | No | City name (e.g. "Hamburg", "Wien", "New York") | |
| lang | No | Language for category labels: "de", "en", "fr", "nl", "it", "es", "pt", "pl" | |
| page | No | Page number for pagination (default 1) | |
| limit | No | Results per page (1-50, default 10) | |
| query | No | Search query (e.g. "plumber", "Zahnarzt", "Italian restaurant") | |
| format | No | Response format: "json" (default) or "html" for interactive cards | |
| country | Yes | ISO country code (required, e.g. "DE", "AT", "US") | |
| category | No | Category slug (e.g. "plumber", "restaurant", "dentist") | |
| min_confidence | No | Minimum confidence score 0-100 to filter low-quality entries | |
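The two-step workflow the description prescribes — check `available_details`, then batch into `get_business` — can be sketched as follows. The result shape (dicts carrying `id` and `available_details`) is an assumption inferred from the fields the description lists, not a documented schema:

```python
def ids_needing_details(search_results, needed_field):
    """Select businesses whose available_details flag the needed field,
    returning a comma-separated ID string for one batched get_business call.
    """
    matching = [r["id"] for r in search_results
                if needed_field in r.get("available_details", [])]
    return ",".join(matching)

results = [
    {"id": "b1", "available_details": ["phone", "website"]},
    {"id": "b2", "available_details": ["website"]},
    {"id": "b3", "available_details": ["phone", "opening_hours"]},
]
print(ids_needing_details(results, "phone"))  # b1,b3
```

The joined string would then be passed as the `id` argument of `get_business`, so two tool calls cover any number of businesses.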
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, safe, idempotent behavior. The description adds valuable behavioral context about data structure: it lists returned fields (name, category, scores, available_details) and explains the partial-data pattern where available_details indicates data completeness. This helps agents understand why they might need a second call.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences with zero redundancy: sentence one states purpose, sentence two documents return values (compensating for missing output schema), and sentence three provides workflow guidance. Information is front-loaded and every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description comprehensively lists return fields (name, category, logo_url, available_details, scores) and explains the two-step workflow with get_business. Given the high schema coverage and clear behavioral annotations, the description provides complete context for tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage across 11 parameters (lat/lon, city, query, category, etc.), the schema fully documents inputs. The description correctly focuses on behavior and workflow rather than repeating parameter details that are already comprehensively documented in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Search' and clear resource 'local businesses'. It explicitly distinguishes from sibling tool get_business by stating this returns summary data and directing users to 'call get_business for full details', establishing a clear functional boundary.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance: 'Check available_details to see what is available, then call get_business for full details.' This clearly indicates when to use this tool (discovery/filtering) versus the sibling (detail retrieval), effectively defining the tool's place in a multi-step process.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
send_feedback
Report data quality issues for a business. Use when you notice incorrect phone numbers, wrong addresses, outdated info, or closed businesses.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Type of feedback: POSITIVE (correct data), NEGATIVE (wrong data), NOT_FOUND (business gone), PHONE_INVALID, WEB_INVALID, HOURS_WRONG, DUPLICATE | |
| comment | No | Free text description of the issue or suggested correction | |
| business_id | Yes | Business ID to report about | |
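Since this is the server's only write operation, a client may want to validate the enum before submitting. A hedged sketch using only the type values documented above (the helper itself is illustrative):

```python
FEEDBACK_TYPES = {
    "POSITIVE", "NEGATIVE", "NOT_FOUND",
    "PHONE_INVALID", "WEB_INVALID", "HOURS_WRONG", "DUPLICATE",
}

def build_feedback_args(business_id, feedback_type, comment=None):
    """Assemble a send_feedback payload, rejecting unknown type values
    before the write call is ever made.
    """
    if feedback_type not in FEEDBACK_TYPES:
        raise ValueError(f"type must be one of {sorted(FEEDBACK_TYPES)}")
    args = {"business_id": business_id, "type": feedback_type}
    if comment is not None:
        args["comment"] = comment
    return args

print(build_feedback_args("b42", "PHONE_INVALID", comment="Number disconnected"))
# {'business_id': 'b42', 'type': 'PHONE_INVALID', 'comment': 'Number disconnected'}
```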
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-destructive write operation (readOnlyHint=false, destructiveHint=false). The description adds domain context that this is specifically for 'data quality issues,' but does not disclose behavioral details beyond annotations such as whether feedback is queued for review, if confirmation is sent, or rate limiting concerns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose, second provides usage conditions. Information is front-loaded and every clause earns its place by conveying either the core action or specific triggering conditions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 3-parameter schema with complete coverage and no output schema, the description adequately covers the tool's scope. It appropriately references the feedback categories without needing to explain return values. Minor gap: does not clarify if feedback is immediate or batched, or user attribution requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by mapping real-world scenarios (phone numbers, addresses, closed businesses) to the abstract enum types in the schema, helping the agent understand which 'type' value to select for specific issues. It implies the business_id target without needing to repeat schema details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Report[s] data quality issues for a business' using a specific verb and resource. It effectively distinguishes from sibling retrieval tools (get_business, search_businesses) by specifying this is a reporting/feedback mechanism rather than a data retrieval operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit positive guidance ('Use when you notice...') with concrete examples (incorrect phone numbers, wrong addresses, closed businesses) that map to the enum values. However, it lacks explicit negative guidance or differentiation from the sibling 'suggest' tool which might also involve submitting business-related feedback.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
suggest — Read-only, Idempotent
Autocomplete suggestions for cities or categories/business names. Use before searching to resolve ambiguous user input (e.g. "Mün" → "München").
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | "city" for city name suggestions, "query" for category and business name suggestions | |
| limit | No | Maximum number of suggestions (1-15, default 10) | |
| query | Yes | Search text (minimum 2 characters) | |
| country | No | ISO country code to filter city suggestions by country | |
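A sketch of the documented constraints (2-character minimum query, 1-15 limit, two type values); the `build_suggest_args` helper is hypothetical:

```python
def build_suggest_args(query, kind=None, country=None, limit=10):
    """Validate arguments for the suggest autocomplete tool.

    Enforces the documented minimum query length (2 characters), the
    1-15 limit range, and the two allowed type values.
    """
    if len(query) < 2:
        raise ValueError("query needs at least 2 characters")
    if kind not in (None, "city", "query"):
        raise ValueError('type must be "city" or "query"')
    if not 1 <= limit <= 15:
        raise ValueError("limit must be between 1 and 15")
    args = {"query": query, "limit": limit}
    if kind is not None:
        args["type"] = kind
    if country is not None:
        args["country"] = country
    return args

# Resolve a partial city name before a full search:
print(build_suggest_args("Mün", kind="city", country="DE"))
# {'query': 'Mün', 'limit': 10, 'type': 'city', 'country': 'DE'}
```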
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare read-only/idempotent safety profile. Description adds valuable behavioral context: the transformation logic (partial input → full suggestions), workflow positioning (pre-search), and concrete example of ambiguity resolution.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. Front-loaded with core function ('Autocomplete suggestions...'), followed immediately by usage context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent coverage for input side (100% schema coverage, clear behavioral example). Minor gap: no output schema exists and description doesn't specify return format (array structure, fields), though 'suggestions' implies the general output nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description maps 'query' type to 'categories/business names' and provides example input ('Mün') but does not add extensive format details or constraints beyond what schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Autocomplete) + resource (suggestions for cities/categories/business names). Distinguishes from search_businesses sibling by positioning it as preparatory step ('Use before searching').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('before searching') and for what purpose ('resolve ambiguous user input'). Provides concrete example ('Mün' → 'München') illustrating the disambiguation workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
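A local pre-flight check for the manifest can catch structural mistakes before publishing. This sketch mirrors only the structure shown above; Glama's own verifier is authoritative and additionally matches the email to your account:

```python
import json

SCHEMA_URL = "https://glama.ai/mcp/schemas/connector.json"

def check_glama_manifest(text):
    """Sanity-check a /.well-known/glama.json payload locally."""
    doc = json.loads(text)
    if doc.get("$schema") != SCHEMA_URL:
        raise ValueError("unexpected $schema value")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        raise ValueError("maintainers must be a non-empty list")
    for entry in maintainers:
        if "@" not in entry.get("email", ""):
            raise ValueError("each maintainer needs a valid email")
    return doc

manifest = '{"$schema": "%s", "maintainers": [{"email": "you@example.com"}]}' % SCHEMA_URL
print(check_glama_manifest(manifest)["maintainers"][0]["email"])  # you@example.com
```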
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.