
LimitGuard Trust Intelligence

Server Details

Entity verification, sanctions screening, and trust scoring for AI agents.

Status: Healthy
Transport: Streamable HTTP


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
check_agent

Verify AI agent trust via LimitGuard.

    Checks if an AI agent is trusted based on its identifier.
    Used for multi-agent systems to verify delegation targets.

    Args:
        agent_id: Unique agent identifier
        agent_name: Human-readable agent name
    
Parameters (JSON Schema)

Name        Required  Description  Default
agent_id    Yes
agent_name  Yes

Output Schema

Name    Required  Description
result  Yes
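The check_agent definition above maps onto a standard MCP `tools/call` request. A minimal sketch, assuming a JSON-RPC transport; the agent identifiers are made up for illustration:

```python
import json

def build_check_agent_call(agent_id: str, agent_name: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request for the check_agent tool."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "check_agent",
            "arguments": {
                "agent_id": agent_id,      # unique agent identifier (required)
                "agent_name": agent_name,  # human-readable agent name (required)
            },
        },
    }
    return json.dumps(payload)

# Example: verify a hypothetical delegation target before handing off a task.
request = build_check_agent_call("agent-7f3a", "Invoice Processor")
```

The request body is what a client would send; the `result` field of the response carries the trust verdict per the output schema above.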
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the full burden. It adds valuable context by naming the 'LimitGuard' system and specifying the 'delegation targets' use case. However, it fails to confirm whether the operation is read-only (implied by 'verify' but not stated), what happens for untrusted agents, or what authentication is required.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with summary first ('Verify AI agent trust'), followed by elaboration and Args section. The Args block is necessary given zero schema coverage. Minor redundancy exists between the first two sentences, but overall efficiently organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a 2-parameter verification tool. Mentions specific technology (LimitGuard) and domain context (multi-agent delegation). Output schema exists (per context signals) so return value description is not required, though error handling could be noted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage (only titles). The Args block compensates by explaining agent_id as 'Unique agent identifier' and agent_name as 'Human-readable agent name', adding necessary semantic meaning. Score reflects effective compensation for poor schema coverage, though parameter descriptions are basic.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific purpose: 'Verify AI agent trust via LimitGuard' with a verb (verify), resource (agent trust), and mechanism (LimitGuard). It distinguishes itself from siblings like get_trust_score by framing this as a binary trust check for delegation targets. However, the first two sentences are slightly redundant, and the distinction from verify_wallet could be made explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides contextual usage ('Used for multi-agent systems to verify delegation targets') indicating when to use it. However, it lacks explicit guidance on when NOT to use it or direct comparison to siblings like get_trust_score (use this for binary verification vs. numerical scores) or verify_wallet.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_entity

Check entity trust score via LimitGuard API.

    Full trust intelligence check on a business entity.
    Returns trust score (0-100), risk level, and recommendation.

    Args:
        entity_name: Full legal name of the entity
        country: ISO 3166-1 alpha-2 country code (e.g., NL, BE, DE)
        kvk_number: Optional Dutch KVK registration number (8 digits)
        domain: Optional company website domain
    
Parameters (JSON Schema)

Name         Required  Description  Default
domain       No
country      Yes
kvk_number   No
entity_name  Yes

Output Schema

Name    Required  Description
result  Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses return values ('trust score (0-100), risk level, and recommendation') and external API source ('LimitGuard'), but omits operational details like rate limits, authentication requirements, error handling, or whether the check is synchronous.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded purpose in first sentence. Args block is unusually structured (Python-docstring style) but necessary given 0% schema coverage. Minor redundancy between 'Check entity trust score' and 'Full trust intelligence check', but no wasted words in parameter documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with 0% schema coverage, the detailed Args section ensures completeness. With output schema present, the brief mention of return values is sufficient. Only gap is lack of explicit sibling differentiation for the crowded 'trust/risk score' namespace.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, making the description critical. The Args section comprehensively documents all 4 parameters: it adds format specifications ('ISO 3166-1 alpha-2', '8 digits'), examples ('NL, BE, DE'), legal context ('Dutch KVK'), and optionality ('Optional'), fully compensating for the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Check entity trust score') and resource ('business entity') via 'LimitGuard API'. Mentions 'Full trust intelligence check' implying comprehensiveness, but does not explicitly differentiate from siblings 'get_risk_score' or 'get_trust_score' which sound similar.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this comprehensive check versus the sibling tools 'get_risk_score' or 'get_trust_score'. No prerequisites or error conditions mentioned. Only implicit usage context via parameter descriptions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_risk_score

Quick risk assessment without full trust check.

    Faster endpoint that focuses on risk signals only.
    Use when you only need basic risk evaluation.

    Args:
        entity_name: Full legal name of the entity
        country: ISO 3166-1 alpha-2 country code
    
Parameters (JSON Schema)

Name         Required  Description  Default
country      Yes
entity_name  Yes

Output Schema

Name    Required  Description
result  Yes
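The guidance above ('Use when you only need basic risk evaluation') can be captured in a tiny routing helper. A sketch under the assumption that check_entity is the full-check sibling; the helper name is hypothetical:

```python
def pick_screening_tool(need_full_trust_check: bool) -> dict:
    """Pick between the fast risk endpoint and the full trust check.

    Per the tool descriptions: get_risk_score is the faster endpoint
    focused on risk signals only; check_entity runs the full trust
    intelligence check (trust score, risk level, recommendation).
    """
    if need_full_trust_check:
        return {"name": "check_entity",
                "required": ["entity_name", "country"]}
    return {"name": "get_risk_score",
            "required": ["entity_name", "country"]}
```

Both tools take the same two required arguments, so a caller can switch between them without reshaping its inputs.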
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and successfully discloses key behavioral traits: it's 'Quick' and a 'Faster endpoint' compared to alternatives, and explicitly scopes the operation to 'risk signals only' rather than comprehensive trust verification.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose front-loaded. The Args section is necessary given zero schema coverage. Minor redundancy between 'Quick' and 'Faster endpoint' sentences, but each adds distinct context (speed vs. endpoint type).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a tool with existing output schema: documents both required parameters, explains domain-specific behavior (risk vs trust), and provides selection criteria. No explanation of return values needed since output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description fully compensates by providing exact semantics for both parameters in the Args section: entity_name is the 'Full legal name' and country requires 'ISO 3166-1 alpha-2' format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('risk assessment') and clearly distinguishes from sibling tool get_trust_score by stating it operates 'without full trust check' and focuses on 'risk signals only', establishing its specific scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear when-to-use guidance ('Use when you only need basic risk evaluation') and implies the alternative (get_trust_score for full checks), though it doesn't explicitly name the alternative tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_trust_score

Quick trust score lookup by entity ID.

    Fast lookup for previously checked entities.
    Returns cached score if available.

    Args:
        entity_id: Entity identifier (KVK number, domain, or hash)
    
Parameters (JSON Schema)

Name       Required  Description  Default
entity_id  Yes

Output Schema

Name    Required  Description
result  Yes
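Since entity_id accepts three formats (KVK number, domain, or hash), a caller may want to sanity-check which one it is holding before the lookup. A heuristic sketch; the hash format is not documented, so anything unrecognized falls through to 'hash':

```python
import re

def classify_entity_id(entity_id: str) -> str:
    """Heuristically classify a get_trust_score entity_id.

    The tool accepts a KVK number (8 digits), a company domain, or a
    hash. The hash format is undocumented, so it is the fallback here.
    """
    if re.fullmatch(r"\d{8}", entity_id):
        return "kvk_number"
    # Rough domain shape: dot-separated labels of letters, digits, hyphens.
    if re.fullmatch(r"[a-z0-9-]+(\.[a-z0-9-]+)+", entity_id, re.IGNORECASE):
        return "domain"
    return "hash"
```

This kind of check is useful for logging and for deciding whether a fresh check_entity call (which takes entity_name and country instead) is the better fit.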
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the full burden. It adds valuable behavioral context about caching ('Returns cached score if available') and performance ('Fast'). However, it lacks disclosure on error handling, rate limits, or what is returned when the entity is not found.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with purpose in first sentence. Subsequent sentences add distinct value (caching behavior, parameter docs). 'Args:' section is slightly formal for MCP but efficient. No redundant filler, though could be slightly more compact.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a single-parameter lookup tool with existing output schema (per rules, return values need not be described). Covers caching behavior adequately. Missing explicit distinction from sibling get_risk_score and error-case handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, making description compensation essential. The text successfully documents entity_id semantics by specifying valid formats: 'KVK number, domain, or hash,' which adds substantial meaning absent from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (lookup) and resource (trust score) with scope (by entity ID). Mentions 'previously checked entities' which implies distinction from real-time checks, though sibling differentiation could be more explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context via 'Fast lookup for previously checked entities' and 'cached score,' suggesting when to use for speed vs. fresh data. However, lacks explicit 'when not to use' or named sibling alternatives (e.g., does not contrast with check_entity).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_wallet

Check wallet trust score for crypto payments.

    Verifies wallet against scam lists and transaction patterns.
    Supports EVM (0x...) and Solana (base58) addresses.

    Args:
        wallet_address: Blockchain wallet address
        chain_id: CAIP-2 chain ID (default: eip155:8453 for Base)
    
Parameters (JSON Schema)

Name            Required  Description  Default
chain_id        No                     eip155:8453
wallet_address  Yes

Output Schema

Name    Required  Description
result  Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the full burden. It successfully discloses the verification methodology (scam lists, transaction patterns) and address format support. It does not state the safety profile (read-only vs. destructive) or performance characteristics, even though an output schema is present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with front-loaded purpose sentence followed by verification details and Args section. Slightly unusual indentation/formatting but every sentence earns its place by conveying distinct information (purpose, methodology, format support, parameters).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for complexity level: 2 simple parameters with output schema present. Description adequately covers tool purpose, verification behavior, and parameter semantics without needing to duplicate output schema details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema description coverage. Args section adds crucial semantics: wallet_address is 'Blockchain wallet address', chain_id is 'CAIP-2 chain ID' with concrete default example ('eip155:8453 for Base'), clarifying formats the schema omits.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States clear verb+resource ('Check wallet trust score') and domain ('crypto payments'). Distinguishes from check_agent/check_entity by targeting wallets, though it doesn't clarify relationship to sibling get_trust_score which could cause confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context by specifying verification against 'scam lists and transaction patterns' and supported address formats (EVM/Solana). However, lacks explicit when-to-use guidance or comparison with get_trust_score/check_entity alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
