Glama

discava – Business Directory for AI

Ownership verified

Server Details

Search for local businesses worldwide. Structured data optimized for AI agents.
• Search millions of businesses across 49 countries (Europe, North America, South America, Asia, Oceania)
• Quality & demand scoring for every business
• Ranking based on real user click-through data

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

6 tools
get_business (A)
Read-only · Idempotent

Get complete details for one or more businesses including address, phone, website, opening hours, services, payment methods, social links, logo, business image, and coordinates. Pass comma-separated IDs for batch requests.

Parameters (JSON Schema)
- id (required): One or more business IDs, comma-separated for batch (e.g. "id1,id2,id3"). ALWAYS use batch when fetching multiple businesses.
- format (optional): Response format: "json" (default) or "html" for interactive cards
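As an illustration of the batching guidance above, here is a minimal sketch of how a client might assemble a `tools/call` request for this tool. The JSON-RPC envelope follows the general MCP convention; the helper name is invented, and nothing beyond the two documented parameters is discava-specific.

```python
import json


def build_get_business_call(ids, fmt="json", request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request for get_business.

    Per the tool description, multiple IDs are joined into a single
    comma-separated string so one batched call replaces N calls.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_business",
            "arguments": {"id": ",".join(ids), "format": fmt},
        },
    }


req = build_get_business_call(["id1", "id2", "id3"])
print(json.dumps(req["params"]["arguments"]))
# {"id": "id1,id2,id3", "format": "json"}
```

Joining the IDs client-side keeps the agent on the documented "ALWAYS use batch" path instead of issuing three separate calls.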
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, confirming the safe, repeatable nature of the operation. The description adds valuable behavioral context by enumerating the specific data fields returned (services, payment methods, coordinates, etc.), which compensates for the lack of an output schema. It does not contradict any annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of exactly two high-density sentences. The first front-loads the core capability and enumerates return fields; the second provides the batching instruction. There is no redundant or filler text—every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description excellently compensates by listing all major data fields returned (address, phone, social links, coordinates, etc.). Combined with comprehensive annotations and a simple 2-parameter schema, the description provides sufficient context for an agent to understand both the input requirements and the expected data payload.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with detailed explanations and examples (e.g., 'e.g. "id1,id2,id3"'). The description mentions 'Pass comma-separated IDs for batch requests', reinforcing the batching concept, but adds no additional syntax, validation rules, or semantic details beyond what the schema already provides. Baseline 3 is appropriate given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Get') and resource ('businesses'), explicitly scopes the operation ('complete details'), and lists specific data fields returned (address, phone, website, etc.). The mention of 'comma-separated IDs' distinguishes this from the sibling 'search_businesses', implying this tool requires specific identifiers rather than search queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear implicit guidance by mentioning 'Pass comma-separated IDs', indicating this tool is for when specific business IDs are already known. It also promotes efficient usage by highlighting batch request capability. However, it does not explicitly name 'search_businesses' as the alternative for when IDs are unknown, relying on the user to infer this from the tool names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_rankings (A)
Read-only · Idempotent

Get the most popular businesses in an area, ranked by real user click-through data. Use to find top-rated businesses by category.

Parameters (JSON Schema)
- city (optional): City name to filter results
- lang (optional): Language for category labels: "de", "en", "fr", "nl", "it", "es", "pt", "pl"
- limit (optional): Number of results (1-20, default 10)
- country (required): ISO country code (e.g. "DE", "AT", "US")
- category (optional): Category slug or name (e.g. "plumber", "restaurant", "Klempner")
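A hedged sketch of client-side argument validation for this tool, based only on the constraints listed above (country required, limit within 1-20); the helper name is hypothetical.

```python
def get_rankings_args(country, city=None, category=None, lang=None, limit=10):
    """Validate and assemble arguments for get_rankings.

    country is the only required parameter (ISO code, e.g. "DE");
    limit must stay within the documented 1-20 range.
    """
    if not (1 <= limit <= 20):
        raise ValueError("limit must be between 1 and 20")
    args = {"country": country.upper(), "limit": limit}
    # Optional filters are only included when actually supplied.
    for key, value in (("city", city), ("category", category), ("lang", lang)):
        if value is not None:
            args[key] = value
    return args


print(get_rankings_args("de", city="Hamburg", category="plumber"))
```

Rejecting an out-of-range limit before the call avoids a round trip to the server for a request that cannot succeed.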
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable context beyond annotations by specifying the ranking methodology ('real user click-through data'). However, misses opportunity to disclose return format, pagination behavior with the 1-20 limit, or error conditions despite annotations covering the safety profile.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first defines functionality and data source, second states use case. Front-loaded with essential information and appropriate length for complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage and comprehensive annotations covering safety/read-only aspects, the description adequately covers the tool's purpose and ranking methodology. Missing output details are acceptable since no output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description mentions 'area' (aligning with city/country) and 'category' which reinforces parameter usage, but adds no semantic details beyond schema descriptions (e.g., doesn't explain the lang parameter's purpose or ISO format for country).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific action ('Get') and resource ('businesses'), with a clear differentiating factor ('ranked by real user click-through data'). However, it doesn't explicitly contrast itself with search_businesses or get_business.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context ('Use to find top-rated businesses by category') suggesting when to use it, but lacks explicit when-not-to-use guidance or named alternatives. Doesn't clarify when to prefer this over search_businesses.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health_check (A)
Read-only · Idempotent

Check if the discava API is online and responding. Returns status and version. No parameters required.

Parameters (JSON Schema)
No parameters
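Since the tool takes no arguments, a call reduces to the tool name with an empty arguments object. The sketch below also shows how a client might interpret the result; the exact field names ('status', 'version') are assumptions inferred from "Returns status and version", since no output schema is published.

```python
def build_health_check_call(request_id=1):
    """`tools/call` request for health_check; arguments stay empty."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "health_check", "arguments": {}},
    }


def is_api_up(result):
    """Hypothetical result check; the 'status' key is an assumption."""
    return result.get("status") not in (None, "down", "error")


print(is_api_up({"status": "ok", "version": "1.2.3"}))  # True
```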

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, establishing the safety profile. The description adds valuable behavioral context by specifying that it 'Returns status and version,' which compensates for the missing output schema. It does not contradict any annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three efficient sentences with no filler content. It is front-loaded with the core purpose ('Check if the discava API is online'), followed by return values and parameter requirements. Every sentence earns its place in the 16-word total.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity as a health check endpoint and the richness of annotations (covering safety and idempotency), the description provides complete context. It covers the purpose, return values (compensating for no output schema), and parameter requirements sufficiently for an agent to invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters, the baseline score is 4 per the scoring rubric. The description confirms 'No parameters required,' which aligns with the empty input schema and its description. No additional parameter semantics are needed or provided beyond this confirmation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the specific verb 'Check' and resource 'discava API' with clear scope ('online and responding'). It effectively distinguishes from siblings like get_business and send_feedback by focusing on infrastructure health rather than business data operations. The mention of returning 'status and version' further clarifies the diagnostic nature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context that this tool verifies API availability, which inherently signals when to use it (when checking connectivity). However, it lacks explicit guidance such as 'use this before calling other tools' or explicit contrast with sibling tools. The usage is clear from context but not explicitly prescribed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_businesses (A)
Read-only · Idempotent

Search for local businesses. Returns name, category, city, country, logo_url, available_details (what data exists), and scores. Check available_details to see what is available, then call get_business for full details.

Parameters (JSON Schema)
- lat (optional): Latitude for distance calculation
- lon (optional): Longitude for distance calculation
- city (optional): City name (e.g. "Hamburg", "Wien", "New York")
- lang (optional): Language for category labels: "de", "en", "fr", "nl", "it", "es", "pt", "pl"
- page (optional): Page number for pagination (default 1)
- limit (optional): Results per page (1-50, default 10)
- query (optional): Search query (e.g. "plumber", "Zahnarzt", "Italian restaurant")
- format (optional): Response format: "json" (default) or "html" for interactive cards
- country (required): ISO country code (e.g. "DE", "AT", "US")
- category (optional): Category slug (e.g. "plumber", "restaurant", "dentist")
- min_confidence (optional): Minimum confidence score 0-100 to filter low-quality entries
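The two-step workflow the description prescribes (search, inspect available_details, then batch get_business) can be sketched as below. The result field names ('id', 'available_details') are assumptions, since no output schema is published.

```python
def ids_for_detail_fetch(search_hits, wanted_field):
    """From search results, collect the IDs of businesses that expose
    `wanted_field` in available_details, joined as one comma-separated
    string for a single batched get_business call."""
    ids = [
        hit["id"]
        for hit in search_hits
        if wanted_field in hit.get("available_details", [])
    ]
    return ",".join(ids)


hits = [
    {"id": "a1", "available_details": ["phone", "website"]},
    {"id": "b2", "available_details": ["website"]},
    {"id": "c3", "available_details": ["phone"]},
]
print(ids_for_detail_fetch(hits, "phone"))  # a1,c3
```

Filtering on available_details first means the follow-up get_business call only fetches businesses that can actually satisfy the request.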
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, safe, idempotent behavior. The description adds valuable behavioral context about data structure: it lists returned fields (name, category, scores, available_details) and explains the partial-data pattern where available_details indicates data completeness. This helps agents understand why they might need a second call.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences with zero redundancy: sentence one states purpose, sentence two documents return values (compensating for missing output schema), and sentence three provides workflow guidance. Information is front-loaded and every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description comprehensively lists return fields (name, category, logo_url, available_details, scores) and explains the two-step workflow with get_business. Given the high schema coverage and clear behavioral annotations, the description provides complete context for tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage across 11 parameters (lat/lon, city, query, category, etc.), the schema fully documents inputs. The description correctly focuses on behavior and workflow rather than repeating parameter details that are already comprehensively documented in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific verb 'Search' and clear resource 'local businesses'. It explicitly distinguishes from sibling tool get_business by stating this returns summary data and directing users to 'call get_business for full details', establishing a clear functional boundary.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: 'Check available_details to see what is available, then call get_business for full details.' This clearly indicates when to use this tool (discovery/filtering) versus the sibling (detail retrieval), effectively defining the tool's place in a multi-step process.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send_feedback (A)

Report data quality issues for a business. Use when you notice incorrect phone numbers, wrong addresses, outdated info, or closed businesses.

Parameters (JSON Schema)
- type (required): Type of feedback: POSITIVE (correct data), NEGATIVE (wrong data), NOT_FOUND (business gone), PHONE_INVALID, WEB_INVALID, HOURS_WRONG, DUPLICATE
- comment (optional): Free text description of the issue or suggested correction
- business_id (required): Business ID to report about
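The scenario-to-enum mapping implied by the description can be made explicit in client code. The mapping below is an illustrative reading of the documented type values, not part of the API, and the helper name is invented.

```python
# Illustrative mapping from an observed issue to the documented enum values.
ISSUE_TO_TYPE = {
    "data is correct": "POSITIVE",
    "data is wrong": "NEGATIVE",
    "business no longer exists": "NOT_FOUND",
    "phone number does not work": "PHONE_INVALID",
    "website is dead": "WEB_INVALID",
    "opening hours are wrong": "HOURS_WRONG",
    "duplicate listing": "DUPLICATE",
}


def send_feedback_args(business_id, issue, comment=None):
    """Assemble arguments for send_feedback: type and business_id are
    required, comment is optional free text."""
    args = {"business_id": business_id, "type": ISSUE_TO_TYPE[issue]}
    if comment:
        args["comment"] = comment
    return args


print(send_feedback_args("id42", "phone number does not work",
                         comment="Number is disconnected"))
```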
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-destructive write operation (readOnlyHint=false, destructiveHint=false). The description adds domain context that this is specifically for 'data quality issues,' but does not disclose behavioral details beyond annotations such as whether feedback is queued for review, if confirmation is sent, or rate limiting concerns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes purpose, second provides usage conditions. Information is front-loaded and every clause earns its place by conveying either the core action or specific triggering conditions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 3-parameter schema with complete coverage and no output schema, the description adequately covers the tool's scope. It appropriately references the feedback categories without needing to explain return values. Minor gap: does not clarify if feedback is immediate or batched, or user attribution requirements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by mapping real-world scenarios (phone numbers, addresses, closed businesses) to the abstract enum types in the schema, helping the agent understand which 'type' value to select for specific issues. It implies the business_id target without needing to repeat schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Report[s] data quality issues for a business' using a specific verb and resource. It effectively distinguishes from sibling retrieval tools (get_business, search_businesses) by specifying this is a reporting/feedback mechanism rather than a data retrieval operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit positive guidance ('Use when you notice...') with concrete examples (incorrect phone numbers, wrong addresses, closed businesses) that map to the enum values. However, it lacks explicit negative guidance or differentiation from the sibling 'suggest' tool which might also involve submitting business-related feedback.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

suggest (A)
Read-only · Idempotent

Autocomplete suggestions for cities or categories/business names. Use before searching to resolve ambiguous user input (e.g. "Mün" → "München").

Parameters (JSON Schema)
- type (optional): "city" for city name suggestions, "query" for category and business name suggestions
- limit (optional): Maximum number of suggestions (1-15, default 10)
- query (required): Search text (minimum 2 characters)
- country (optional): ISO country code to filter city suggestions by country
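A sketch enforcing the documented input constraints (query of at least 2 characters, limit within 1-15) before the call is made; the helper name is hypothetical, and `kind` stands in for the `type` parameter to avoid shadowing the Python builtin.

```python
def suggest_args(query, kind="query", country=None, limit=10):
    """Assemble arguments for suggest; `kind` maps to the documented
    `type` parameter ("city" or "query")."""
    if len(query) < 2:
        raise ValueError("query needs at least 2 characters")
    if kind not in ("city", "query"):
        raise ValueError('type must be "city" or "query"')
    if not (1 <= limit <= 15):
        raise ValueError("limit must be between 1 and 15")
    args = {"query": query, "type": kind, "limit": limit}
    if country is not None:
        args["country"] = country
    return args


# Resolve ambiguous partial input before calling search_businesses.
print(suggest_args("Mün", kind="city", country="DE"))
```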
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only/idempotent safety profile. Description adds valuable behavioral context: the transformation logic (partial input → full suggestions), workflow positioning (pre-search), and concrete example of ambiguity resolution.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. Front-loaded with core function ('Autocomplete suggestions...'), followed immediately by usage context. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent coverage for input side (100% schema coverage, clear behavioral example). Minor gap: no output schema exists and description doesn't specify return format (array structure, fields), though 'suggestions' implies the general output nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline 3. Description maps 'query' type to 'categories/business names' and provides example input ('Mün') but does not add extensive format details or constraints beyond what schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (Autocomplete) + resource (suggestions for cities/categories/business names). Distinguishes from search_businesses sibling by positioning it as preparatory step ('Use before searching').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('before searching') and for what purpose ('resolve ambiguous user input'). Provides concrete example ('Mün' → 'München') illustrating the disambiguation workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

