BICScan MCP Server

by ahnlabio

get_assets

Retrieve cryptocurrency asset holdings for blockchain addresses, including EOAs, contracts, and domain names, to analyze portfolio composition.

Instructions

Get Assets holdings by CryptoAddress

Args:
    address: EOA, CA, ENS, CNS, KNS.
Returns:
    Dict: where assets is a list of assets

Input Schema

| Name    | Required | Description | Default |
|---------|----------|-------------|---------|
| address | Yes      |             |         |

Implementation Reference

  • The main handler function for the get_assets MCP tool. It is decorated with @mcp.tool() for registration and implements the logic to query the BICScan API for assets held by the given address.
    @mcp.tool()
    async def get_assets(address: str) -> dict:
        """Get Assets holdings by CryptoAddress
    
        Args:
            address: EOA, CA, ENS, CNS, KNS.
        Returns:
            Dict: where assets is a list of assets
        """
    
        logger.info(f"Getting assets for address: {address}")
        endpoint = "/v1/scan"
        data = {
            "query": address,
            "sync": True,
            "assets": True,
            "engines": ["ofac"],
        }
    
        return await post_request(endpoint, data=data)
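The handler above always sends the same four fields to the `/v1/scan` endpoint. A minimal sketch of that payload construction, for readers who want to see the request body in isolation (the `build_scan_payload` helper and the example address are illustrative, not part of the server):

```python
def build_scan_payload(address: str) -> dict:
    """Build the JSON body get_assets sends for a synchronous asset scan."""
    return {
        "query": address,     # EOA, CA, ENS, CNS, or KNS
        "sync": True,         # wait for the scan to finish before responding
        "assets": True,       # include asset holdings in the result
        "engines": ["ofac"],  # also run the OFAC screening engine
    }

payload = build_scan_payload("vitalik.eth")
print(sorted(payload))  # -> ['assets', 'engines', 'query', 'sync']
```

Note that `sync: True` means the call blocks until the scan completes, which is why the helper below uses a 30-second timeout.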
  • Supporting helper function used by get_assets (and other tools) to perform authenticated POST requests to the BICScan API, handling errors and logging.
    async def post_request(
        endpoint: str, data: dict[str, Any] | None = None
    ) -> dict[str, Any] | None:
        """Make a request to BICScan API with proper error handling."""
        headers = {
            "User-Agent": "bicscan-mcp/1.0",
            "Accept": "application/json",
            "X-Api-Key": BICSCAN_API_KEY,
        }
        url = urljoin(BICSCAN_API_BASE, endpoint)
    
        async with httpx.AsyncClient() as client:
            try:
                logger.info(f"Making request to {url}")
                logger.debug(f"{headers=} {data=}")
                response = await client.post(url, headers=headers, json=data, timeout=30)
                response.raise_for_status()
                logger.info(f"Received response: {response.status_code}")
                return response.json()
            except httpx.HTTPStatusError as http_err:
                # The response is available via the exception; its body may
                # still carry a JSON error payload from the API.
                logger.error(f"HTTP error: {http_err}, {http_err.response.text}")
                try:
                    return http_err.response.json()
                except ValueError:
                    return {}
            except Exception as e:
                # `response` may never have been assigned here (e.g. on a
                # connection error), so don't reference it in the log.
                logger.exception(f"Request failed: {e}")
                return {}
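One subtlety in the helper: `urljoin` treats an endpoint with a leading slash as root-relative, so it replaces any path component of the base URL rather than appending to it. A quick illustration (the base URLs here are placeholders, not BICScan's actual API base):

```python
from urllib.parse import urljoin

# A leading slash on the endpoint makes it root-relative.
print(urljoin("https://api.example.com", "/v1/scan"))
# -> https://api.example.com/v1/scan

# If the base URL ever carried a path prefix, that prefix would be dropped:
print(urljoin("https://api.example.com/proxy/", "/v1/scan"))
# -> https://api.example.com/v1/scan
```

This is harmless as long as `BICSCAN_API_BASE` has no path component, but worth knowing if the base URL is ever pointed at a proxy.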
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a 'Get' operation which implies read-only behavior, but doesn't disclose any behavioral traits like rate limits, authentication requirements, error conditions, or what happens with invalid addresses. The description mentions the return format ('Dict: where assets is a list of assets') but provides minimal detail about the response structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and well-structured with clear sections for Args and Returns. The main purpose is stated upfront, followed by parameter details and return information. There's minimal wasted text, though the formatting could be slightly cleaner (e.g., consistent capitalization in the Returns section).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter with multiple formats), no annotations, and no output schema, the description is minimally adequate. It covers the basic purpose and parameter semantics well, but lacks behavioral context and usage guidance. For a tool that presumably queries blockchain data, more information about limitations, data freshness, or error handling would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant value beyond the input schema, which has 0% description coverage. It explains that the 'address' parameter accepts multiple formats: 'EOA, CA, ENS, CNS, KNS' (presumably Externally Owned Account, Contract Address, Ethereum Name Service, etc.). This semantic clarification is crucial for proper tool invocation and compensates well for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get Assets holdings by CryptoAddress' specifies both the action (get) and the resource (asset holdings). It is distinguished from the sibling tool 'get_risk_score' by its focus on asset holdings rather than risk assessment. However, it doesn't fully differentiate from potential other asset-related tools beyond that single sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There's no mention of when this tool is appropriate, what scenarios it's designed for, or any prerequisites for its use. The single sibling tool 'get_risk_score' is not referenced, leaving the agent with no comparative context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ahnlabio/bicscan-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.