Hive Agent KYC
Server Details
Know-Your-Agent (KYA) identity verification and trust scoring for autonomous A2A networks
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: srotzin/hive-mcp-agent-kyc
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 4 of 4 tools scored.
Each tool targets a distinct KYC operation: checking FATF lists, checking OFAC lists, querying past screenings, and screening addresses. No overlap between tools.
All tools follow a consistent 'agent_kyc_verb_noun' pattern in snake_case, making it predictable and easy to understand.
With 4 tools, the server is focused and lean, covering the essential KYC/AML operations without unnecessary bloat.
Covers list checks, address screening, and audit trail, but lacks a tool for configuration or provider selection. Minor gap.
Available Tools
4 tools

agent_kyc_check_fatf_list
Check whether a country code is on the FATF Call-for-Action or Increased-Monitoring lists. Free. Returns list category and FATF source URL. Snapshot is updated when FATF publishes (triannual). Broker/observer layer only.
| Name | Required | Description | Default |
|---|---|---|---|
| country_code | Yes | ISO-3166-1 alpha-2 country code (e.g. IR, KP, MM) | |
| requester_did | No | Optional DID of requesting agent (logged for audit) | |
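For illustration, a minimal call sketch using the official MCP TypeScript SDK and its Streamable HTTP client. The endpoint URL is a placeholder (the listing's URL field above is blank), and all argument values are examples only:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the connector's actual Streamable HTTP URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "kyc-example-client", version: "1.0.0" });
await client.connect(transport);

// Free, read-only lookup: is KP (North Korea) on a FATF list?
const result = await client.callTool({
  name: "agent_kyc_check_fatf_list",
  arguments: {
    country_code: "KP",                    // ISO-3166-1 alpha-2
    requester_did: "did:example:agent123", // optional; logged for audit
  },
});
console.log(result); // per the description: list category and FATF source URL
```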
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool is free, the snapshot update frequency (triannual), and that it is for the broker/observer layer only. While it does not explicitly label the tool as read-only, that behavior can reasonably be inferred.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise at five short sentences, front-loaded with the primary purpose, and includes essential details without redundancy. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there is no output schema, the description helpfully outlines the return structure. It also mentions the update frequency and usage restrictions. It is fairly complete for a simple lookup tool, though it lacks error-handling and edge-case information, such as behavior for an unrecognized country code.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters described. The description adds the return values (list category, source URL) but does not provide new semantics for the parameters beyond what is in the schema. Baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('check'), the specific resource ('FATF Call-for-Action or Increased-Monitoring lists'), and the input (country code). It also mentions the free nature and return values, making it distinct from sibling tools like OFAC checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes 'Broker/observer layer only' which gives some usage context, but it does not explicitly state when to choose this tool over alternatives such as 'agent_kyc_check_ofac_list' or provide when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agent_kyc_check_ofac_list
Check whether a target identifier (address, name, or ID) appears on the OFAC SDN public sanctions list. Free. Sources the list directly from treasury.gov and caches for 24h. Returns the match record verbatim from the public list. Broker/observer layer only.
| Name | Required | Description | Default |
|---|---|---|---|
| identifier | Yes | Address, full name, or other public identifier to check against the OFAC SDN list | |
| requester_did | No | Optional DID of requesting agent (logged for audit) | |
| identifier_type | No | 'address', 'name', or 'entity'. | address |
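As a sketch only, a thin wrapper around this call; the helper name is ours, not part of the server, and it reuses a connected Client as in the earlier example:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: check an identifier against the cached OFAC SDN list.
export async function checkOfacSdn(client: Client, identifier: string) {
  return client.callTool({
    name: "agent_kyc_check_ofac_list",
    arguments: {
      identifier,              // address, full name, or other public identifier
      identifier_type: "name", // 'address' (default), 'name', or 'entity'
    },
  });
}
```

Any match record comes back verbatim from the public treasury.gov list, which the server caches for 24 hours.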
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description fully compensates for the lack of annotations by detailing the data source (treasury.gov), the caching duration (24h), the response format (verbatim match record), and the usage layer (broker/observer only). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at five short sentences, each adding distinct information: function, cost, data source and caching, output format, and usage layer. There are no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description provides sufficient context for selecting and invoking the tool: what it checks, where from, caching policy, output format, and usage restrictions. No obvious gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds limited value beyond the property descriptions. It reiterates the identifier types but does not detail the optional parameters or their semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Check' with a clear resource 'OFAC SDN public sanctions list'. It lists the types of identifiers (address, name, or ID) and differentiates from the sibling 'agent_kyc_check_fatf_list' by targeting a specific sanctions list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage context by noting it is free and limited to broker/observer layer, which guides appropriateness. It implies the alternative sibling for FATF list but does not explicitly state when not to use this one.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agent_kyc_query_status
Return the audit-log entry for a previously-issued screening query. Free. Returns query_id, requester DID, timestamp, provider used, result code, and a hash of the screened address. No PII is stored.
| Name | Required | Description | Default |
|---|---|---|---|
| query_id | Yes | query_id returned from agent_kyc_screen_address | |
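A matching sketch for the audit lookup; again, the helper name is hypothetical:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: fetch the audit-log entry for a prior screening query.
export async function getScreeningAudit(client: Client, queryId: string) {
  return client.callTool({
    name: "agent_kyc_query_status",
    arguments: { query_id: queryId }, // as returned by agent_kyc_screen_address
  });
}
```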
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It discloses that the operation is free and that no PII is stored, which adds transparency. It does not discuss side effects or errors, but given the read-only nature, this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two succinct sentences: first states purpose, second adds key details (free, return fields, privacy). Zero waste, perfectly front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool with no output schema, the description covers purpose, return fields, and privacy. It lacks error-handling details, such as behavior for an unknown query_id, but is largely sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description of query_id. The description repeats the same information as the schema ('query_id returned from agent_kyc_screen_address'), adding no new semantics. A baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns an audit-log entry for a previously-issued screening query, listing specific return fields (query_id, DID, timestamp, etc.). This distinguishes it from siblings like agent_kyc_check_fatf_list and agent_kyc_screen_address.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage after a screening query, noting that it is free and returns audit data. However, it does not explicitly state when to use it versus alternatives or call out exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agent_kyc_screen_address
Route a blockchain address screening request to a third-party KYC/AML provider (Chainalysis, TRM Labs, or Elliptic). Returns the provider's risk score and flags verbatim. Cost: $0.10 USDC on Base. Until partnership keys are configured, returns 503 with backend_pending. Broker/observer layer only — does not issue attestations.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain (base, ethereum, polygon, solana, bitcoin). | ethereum |
| address | Yes | Target blockchain address to screen | |
| provider | No | Preferred provider: 'chainalysis', 'trm', or 'elliptic'. | first available |
| requester_did | Yes | DID of the requesting agent (logged for audit) | |
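A hedged sketch of the paid screening call; the helper name is ours, and the chain and provider values below are simply the schema defaults and one listed option:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: route a paid screening request ($0.10 USDC on Base).
// Until partnership keys are configured, the server returns 503 backend_pending.
export async function screenAddress(client: Client, address: string, requesterDid: string) {
  return client.callTool({
    name: "agent_kyc_screen_address",
    arguments: {
      address,                     // target blockchain address
      requester_did: requesterDid, // required; logged for audit
      chain: "ethereum",           // default; also base, polygon, solana, bitcoin
      provider: "chainalysis",     // optional; 'trm' and 'elliptic' also accepted
    },
  });
}
```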
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description discloses the cost, the error condition, the return behavior (risk score and flags), and the broker-only nature. It could more clearly state the tool's read-only status.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five short sentences cover purpose, behavior, cost, error condition, and scope. There is no redundancy, though the text is slightly dense.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers cost, error handling, return values, and the non-attestation caveat, which is adequate for an agent to decide and invoke given four parameters and no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All parameters are documented in the schema; the description adds the defaults for chain and provider and explains the audit use of requester_did, adding value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The specific verb 'route' pairs with the resource 'blockchain address screening request', and the candidate providers are listed. This distinguishes the tool from its siblings (the FATF and OFAC checks).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the cost, the 503 response when keys are not configured, and the fact that the tool does not issue attestations. It lacks explicit when-to-use guidance versus its siblings but provides enough context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.