GBG Loqate - Reach
Server Details
Verify addresses, email addresses, and phone numbers with confidence scores.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: gbgplc/lqt
- GitHub Stars: 2
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 6 of 6 tools scored.
Each tool has a clearly distinct purpose: two for viewing policies, and four for different types of verification (address, combined contact, email, phone). No overlap or ambiguity.
All tool names follow a consistent verb_noun pattern (list_policies, show_policy, verify_address, etc.) with underscores. The naming conventions are uniform and predictable.
Six tools is an appropriate number for a verification service, covering policy listing and details, and separate verification methods for address, email, phone, and combined contact.
The tool set covers the core domain: viewing policies and verifying addresses, emails, phones, and combined contacts. No obvious missing operations for the intended use case.
Available Tools
6 tools

list_policies (Read-only)
List available decisioning policies (strict, shipping, standard, permissive) with their thresholds.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
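Because list_policies takes no parameters, a call reduces to an empty arguments object. Below is a hypothetical MCP `tools/call` request for it; the JSON-RPC envelope follows the MCP convention, and the `id` is illustrative, not taken from this listing.

```python
# Hypothetical MCP tools/call request for list_policies.
# The envelope shape follows the MCP JSON-RPC convention.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_policies",
        "arguments": {},  # the tool takes no parameters
    },
}

payload = json.dumps(request)
```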
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the agent knows this is a safe read. The description adds value by specifying the exact policies listed and that thresholds are included, providing useful behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that conveys the core functionality immediately. There is no unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose and lists the policy types, but does not hint at the output format or structure. Since there is no output schema, a bit more detail on what is returned (e.g., list of policy names and thresholds) would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the description does not need to add parameter info. Schema coverage is 100% (no params). Baseline for 0 parameters is 4, and the description does not contradict or add anything misleading.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists available decisioning policies along with specifics like the four policy names (strict, shipping, standard, permissive) and thresholds. This differentiates it from siblings like show_policy which likely shows details of a single policy.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for listing all policies, and the sibling show_policy suggests an alternative for viewing a single policy. However, it does not explicitly state when not to use this tool or provide direct comparison.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
show_policy (Read-only)
Show full details for a specific decisioning policy.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Policy name: strict, shipping, standard, or permissive | |
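A sketch of building the show_policy arguments payload. The four policy names come from the parameter table above; the validation helper is our illustration, not part of the server.

```python
# Policy names per the show_policy parameter table.
VALID_POLICIES = {"strict", "shipping", "standard", "permissive"}

def show_policy_args(name: str) -> dict:
    """Return the arguments dict for show_policy, rejecting unknown names."""
    if name not in VALID_POLICIES:
        raise ValueError(f"unknown policy: {name!r}")
    return {"name": name}

args = show_policy_args("shipping")
```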
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the agent knows it is safe. The description adds minimal behavioral context ('show full details') but does not describe what 'full details' includes or any constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, 9 words, front-loaded with verb. No unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with one parameter and no output schema, description is sufficient to understand purpose. However, lack of output format may cause some uncertainty.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema provides 100% coverage for the single 'name' parameter. Description does not add additional meaning beyond the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description 'Show full details for a specific decisioning policy' clearly identifies the verb and resource, distinguishing it from sibling tools like list_policies (which lists all policies) and verification tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing details of a specific policy, but does not explicitly contrast with list_policies or provide when-not-to-use guidance. No alternatives mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_address (Read-only)
Verify an address against Loqate's global reference data. Returns a confidence score (0-1), verification status, and a policy-driven accept/review/reject recommendation. Requires a Loqate API key — pass it via the 'key' field (get one at account.loqate.com).
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Loqate API key — required unless the server has one configured (get one at account.loqate.com) | |
| policy | No | Policy name: strict, shipping, standard (default), or permissive | |
| address | No | Full address string (free-form — use this OR structured fields below) | |
| country | No | ISO 2-letter country code (e.g. US, GB, DE) | |
| options | No | Loqate API options (e.g. GeoCode, Certify, DefaultCountry, OutputCasing) | |
| premise | No | Premise or house number | |
| address2 | No | Second address line | |
| address3 | No | Third address line | |
| address4 | No | Fourth address line | |
| address5 | No | Fifth address line | |
| address6 | No | Sixth address line | |
| address7 | No | Seventh address line | |
| address8 | No | Eighth address line | |
| building | No | Building name | |
| latitude | No | Latitude for reverse geocode | |
| locality | No | City or town | |
| post_box | No | PO Box number | |
| postcode | No | Postal or ZIP code | |
| longitude | No | Longitude for reverse geocode | |
| admin_area | No | State or province | |
| verify_key | No | Custom address verification API key (overrides LOQATE_VERIFY_KEY env var) | |
| verify_url | No | Custom address verification endpoint URL (overrides LOQATE_VERIFY_URL env var) | |
| organization | No | Organization or business name | |
| sub_building | No | Sub-building (e.g. apartment, suite) | |
| thoroughfare | No | Street name | |
| delivery_address | No | Full delivery address | |
| delivery_address1 | No | Delivery address line 1 | |
| delivery_address2 | No | Delivery address line 2 | |
| delivery_address3 | No | Delivery address line 3 | |
| delivery_address4 | No | Delivery address line 4 | |
| delivery_address5 | No | Delivery address line 5 | |
| delivery_address6 | No | Delivery address line 6 | |
| delivery_address7 | No | Delivery address line 7 | |
| delivery_address8 | No | Delivery address line 8 | |
| dependent_locality | No | Dependent locality (e.g. neighborhood) | |
| sub_building_floor | No | Floor number | |
| dependent_thoroughfare | No | Dependent street name | |
| sub_administrative_area | No | Sub administrative area (e.g. county) | |
| double_dependent_locality | No | Double dependent locality | |
| super_administrative_area | No | Super administrative area (e.g. region) | |
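Per the table, verify_address accepts either a free-form `address` string or structured fields, but not both at once. The payloads below sketch both styles; the key value is a placeholder (get a real one at account.loqate.com) and the address itself is only an example.

```python
# Free-form input: one 'address' string plus an optional country hint.
free_form = {
    "key": "YOUR_LOQATE_KEY",  # placeholder, not a real key
    "address": "10 Downing Street, London, SW1A 2AA",
    "country": "GB",
    "policy": "strict",
}

# Structured input: individual fields instead of a single string.
structured = {
    "key": "YOUR_LOQATE_KEY",
    "premise": "10",
    "thoroughfare": "Downing Street",
    "locality": "London",
    "postcode": "SW1A 2AA",
    "country": "GB",
}

# Supply 'address' OR the structured fields, not both.
assert ("address" in free_form) != ("address" in structured)
```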
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only (readOnlyHint: true) and open-world (openWorldHint: true). The description adds useful behavioral context: it requires an API key, uses Loqate data, and returns specific outputs. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with the core purpose. No wasted words; every sentence is informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 40 parameters and no output schema, the description covers the core purpose, prerequisites, and return format. It could mention the choice between free-form and structured address input, but the schema descriptions largely compensate. Overall, adequately complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so each parameter is described. The tool description provides high-level context (requires key, policy-driven) but does not add significant meaning beyond what the schema offers. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Verify an address against Loqate's global reference data' with specific verb and resource, and distinguishes from sibling tools (e.g., verify_email, verify_phone) by focusing on address verification. It also describes the return format (confidence score, status, recommendation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly requires a Loqate API key and guides the user on how to obtain it. It implies usage by mentioning 'policy-driven accept/review/reject recommendation', but does not explicitly state when to use this tool versus alternatives. However, the sibling names make the context clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_contact (Read-only)
Verify address, email, and/or phone together. Returns individual results plus an overall recommendation (most conservative of all provided fields). Requires a Loqate API key — pass it via the 'key' field.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Loqate API key — required unless the server has one configured | |
| email | No | Email address to verify | |
| phone | No | Phone number to verify | |
| policy | No | Policy name: strict, shipping, standard (default), or permissive | |
| address | No | Full address string | |
| country | No | ISO 2-letter country code | |
| options | No | Loqate API options (e.g. GeoCode, Certify, DefaultCountry, OutputCasing) | |
| premise | No | Premise or house number | |
| address2 | No | Second address line | |
| address3 | No | Third address line | |
| address4 | No | Fourth address line | |
| address5 | No | Fifth address line | |
| address6 | No | Sixth address line | |
| address7 | No | Seventh address line | |
| address8 | No | Eighth address line | |
| building | No | Building name | |
| latitude | No | Latitude for reverse geocode | |
| locality | No | City or town | |
| post_box | No | PO Box number | |
| postcode | No | Postal or ZIP code | |
| longitude | No | Longitude for reverse geocode | |
| admin_area | No | State or province | |
| verify_key | No | Custom address verification API key (overrides LOQATE_VERIFY_KEY env var) | |
| verify_url | No | Custom address verification endpoint URL (overrides LOQATE_VERIFY_URL env var) | |
| organization | No | Organization or business name | |
| sub_building | No | Sub-building (e.g. apartment, suite) | |
| thoroughfare | No | Street name | |
| delivery_address | No | Full delivery address | |
| delivery_address1 | No | Delivery address line 1 | |
| delivery_address2 | No | Delivery address line 2 | |
| delivery_address3 | No | Delivery address line 3 | |
| delivery_address4 | No | Delivery address line 4 | |
| delivery_address5 | No | Delivery address line 5 | |
| delivery_address6 | No | Delivery address line 6 | |
| delivery_address7 | No | Delivery address line 7 | |
| delivery_address8 | No | Delivery address line 8 | |
| dependent_locality | No | Dependent locality (e.g. neighborhood) | |
| sub_building_floor | No | Floor number | |
| dependent_thoroughfare | No | Dependent street name | |
| sub_administrative_area | No | Sub administrative area (e.g. county) | |
| double_dependent_locality | No | Double dependent locality | |
| super_administrative_area | No | Super administrative area | |
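The description says the overall recommendation is the "most conservative of all provided fields". One plausible reading is that reject outranks review, which outranks accept; the ranking below is our illustration of that sentence, not the server's actual code.

```python
# Conservatism ranking: higher value = more conservative outcome.
SEVERITY = {"accept": 0, "review": 1, "reject": 2}

def overall_recommendation(per_field: dict) -> str:
    """Combine per-field recommendations into the most conservative one."""
    return max(per_field.values(), key=SEVERITY.__getitem__)

result = overall_recommendation(
    {"address": "accept", "email": "review", "phone": "accept"}
)
# With one field at 'review', the combined recommendation is 'review'.
```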
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds that it returns individual results and an overall recommendation, but does not disclose additional behavioral traits like rate limits or error conditions beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with purpose and output, no fluff. Every sentence contributes essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 42 parameters and no output schema, the description provides a high-level overview of functionality and the combined recommendation logic. It could mention the response structure more explicitly, but it is adequate for an agent to understand the tool's purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by specifying that the 'key' parameter is required unless server-configured and that 'policy' defaults to 'standard', going beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'verify' and the resources (address, email, phone) together, distinguishing it from sibling tools like verify_address, verify_email, verify_phone which handle single fields.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the requirement for a Loqate API key and implies combined verification, but lacks explicit guidance on when not to use it (e.g., for single-field verification, use the specific sibling tools).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_email (Read-only)
Verify an email address via Loqate. Returns confidence, risk level, and accept/review/reject recommendation. Requires a Loqate API key — pass it via the 'key' field.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Loqate API key — required unless the server has one configured | |
| email | Yes | Email address to verify | |
| policy | No | Policy name: strict, shipping, standard (default), or permissive | |
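A minimal verify_email payload, per the table: only `email` is required. The key is a placeholder and, per its description, can be omitted when the server already has one configured; the address is an example value.

```python
# Minimal verify_email arguments; only 'email' is required.
args = {
    "email": "jane.doe@example.com",   # example address
    "key": "YOUR_LOQATE_KEY",          # placeholder; omit if server-configured
    "policy": "standard",              # optional; 'standard' is the default
}

required = {"email"}
missing = required - args.keys()
```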
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and openWorldHint=true. The description adds that the tool calls the Loqate external API and requires an API key, which goes beyond the annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with purpose, and contains no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the schema, annotations, and sibling tools, the description provides sufficient context: the external service (Loqate), the required authentication (API key), and the output fields. No output schema exists, so the return value description is helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for all 3 parameters. The description adds context for the 'key' parameter (required) and mentions return values, but does not add further detail beyond schema for email and policy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool verifies an email address via Loqate and lists specific return values (confidence, risk level, recommendation). It distinguishes from sibling tools that verify addresses, contacts, and phones.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates when to use the tool (for email verification) and how to provide the required API key. It does not explicitly exclude alternatives, but the context of sibling tools makes the usage clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_phone (Read-only)
Verify a phone number via Loqate. Returns confidence, number type, carrier, and accept/review/reject recommendation. Requires a Loqate API key — pass it via the 'key' field.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Loqate API key — required unless the server has one configured | |
| phone | Yes | Phone number (E.164 format preferred, e.g. +442071234567) | |
| policy | No | Policy name: strict, shipping, standard (default), or permissive | |
| country | No | ISO 2-letter country code (helps with parsing if no country prefix) |
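The table prefers E.164 numbers and notes that `country` only helps parsing when the number lacks a prefix. The normalizer below is a naive illustration of getting a spaced number into that shape; it is not Loqate's parser.

```python
def to_e164ish(raw: str) -> str:
    """Drop spaces and punctuation, keeping '+' and digits only."""
    return "".join(ch for ch in raw if ch.isdigit() or ch == "+")

args = {
    "phone": to_e164ish("+44 20 7123 4567"),
    "country": "GB",  # optional hint for numbers without a prefix
}
```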
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds behavioral context beyond annotations by specifying the required API key, return fields, and that verification is read-only (consistent with readOnlyHint=true).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise, front-loaded sentences with no wasted words. Every sentence provides essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers purpose and key parameter, but does not explain the 'policy' or 'country' parameters or return structure, which would help with completeness given no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds meaning beyond schema: clarifies that 'key' is required unless server-configured, and recommends E.164 phone format. Schema descriptions already covered 100% of parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool verifies a phone number via Loqate and lists return values (confidence, number type, etc.), distinguishing it from sibling verification tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use for phone number verification and mentions API key requirement, but does not explicitly state when to use this tool vs alternatives or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.