
base-security-scanner-mcp

compare_bytecode

Analyze two Base mainnet contract addresses to detect cloned code by comparing bytecode and calculating similarity scores.

Instructions

Compare bytecode of two contracts on Base mainnet for clone detection. Returns similarity score and whether they share the same code.

Input Schema

Name      Required  Description              Default
address1  Yes       First contract address   (none)
address2  Yes       Second contract address  (none)
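The schema marks both parameters as required strings but does not state an address format. A minimal pre-flight check an agent could run before calling the tool, assuming standard 20-byte EVM hex addresses (the function name and regex are illustrative, not part of this server):

```typescript
// Hypothetical client-side check: Base uses standard EVM addresses,
// i.e. "0x" followed by exactly 40 hexadecimal characters.
function looksLikeAddress(addr: string): boolean {
  return /^0x[0-9a-fA-F]{40}$/.test(addr);
}

// Example: the Base WETH predeploy passes; a truncated string does not.
console.log(looksLikeAddress("0x4200000000000000000000000000000000000006"));
console.log(looksLikeAddress("0x1234"));
```

Validating both inputs up front avoids spending an RPC round trip on an address the node would reject anyway.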

Implementation Reference

  • The handler for the compare_bytecode tool, which compares the bytecode of two given contract addresses and returns a similarity score based on exact matching and function selector intersections.
    server.tool(
      "compare_bytecode",
      "Compare bytecode of two contracts on Base mainnet for clone detection. Returns similarity score and whether they share the same code.",
      {
        address1: z.string().describe("First contract address"),
        address2: z.string().describe("Second contract address"),
      },
      async ({ address1, address2 }) => {
        try {
          const [code1, code2] = await Promise.all([
            getContractBytecode(address1),
            getContractBytecode(address2),
          ]);
    
          const isContract1 = code1 !== "0x" && code1.length > 2;
          const isContract2 = code2 !== "0x" && code2.length > 2;
    
          if (!isContract1 || !isContract2) {
            return ok({
              address1,
              address2,
              isContract1,
              isContract2,
              match: false,
              similarity: 0,
              message: "One or both addresses are not contracts",
            });
          }
    
          const exactMatch = code1 === code2;
    
          // Calculate similarity: compare bytecode chunks
          let similarity = 0;
          if (exactMatch) {
            similarity = 100;
          } else {
            // Compare selectors as a proxy for functional similarity
            const sel1 = new Set(extractSelectors(code1));
            const sel2 = new Set(extractSelectors(code2));
            const intersection = new Set([...sel1].filter(s => sel2.has(s)));
            const union = new Set([...sel1, ...sel2]);
            similarity = union.size > 0 ? Math.round((intersection.size / union.size) * 100) : 0;
          }
    
          const types1 = identifyContractType(extractSelectors(code1));
          const types2 = identifyContractType(extractSelectors(code2));
    
          return ok({
            address1,
            address2,
            bytecodeSize1: (code1.length - 2) / 2,
            bytecodeSize2: (code2.length - 2) / 2,
            exactMatch,
            selectorSimilarity: similarity,
            contractTypes1: types1,
            contractTypes2: types2,
            verdict: exactMatch
              ? "Exact clone — identical bytecode"
              : similarity > 80
              ? "Very similar — likely forked from same source"
              : similarity > 50
              ? "Moderately similar — may share common patterns"
              : "Different contracts",
          });
        } catch (err) {
          return fail(`compare_bytecode failed: ${err instanceof Error ? err.message : String(err)}`);
        }
      }
    );
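The handler above depends on two helpers not shown on this page: `extractSelectors` and the intersection-over-union ratio it computes inline. A plausible sketch of both, assuming selectors are harvested by scanning the runtime bytecode for PUSH4 (`0x63`) opcodes — a common heuristic for recovering 4-byte function selectors from a dispatcher — is shown below. The actual server implementation may differ:

```typescript
// Sketch only: the server's real extractSelectors is not shown on this page.
// Heuristic: each PUSH4 opcode (0x63) typically pushes a candidate 4-byte
// function selector. A naive byte scan can false-positive on data bytes,
// which is acceptable for a similarity estimate.
function extractSelectorsSketch(bytecode: string): string[] {
  const hex = bytecode.replace(/^0x/, "").toLowerCase();
  const selectors = new Set<string>();
  for (let i = 0; i + 10 <= hex.length; i += 2) {
    if (hex.slice(i, i + 2) === "63") {
      selectors.add("0x" + hex.slice(i + 2, i + 10));
    }
  }
  return [...selectors];
}

// Jaccard similarity over selector sets, mirroring the handler's math:
// |intersection| / |union|, scaled to an integer 0-100.
function selectorSimilarity(a: string[], b: string[]): number {
  const setA = new Set(a);
  const setB = new Set(b);
  const intersection = [...setA].filter((s) => setB.has(s)).length;
  const union = new Set([...setA, ...setB]).size;
  return union > 0 ? Math.round((intersection / union) * 100) : 0;
}
```

For example, two contracts sharing two of four distinct selectors score 50, which the handler's verdict ladder labels "Moderately similar".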
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns a similarity score and a boolean for shared code, but does not explain how the comparison works, what the score range means, whether it requires authentication, rate limits, or any side effects. This is inadequate for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
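To make the gap concrete, a description carrying the behavioral disclosures this review asks for might read as follows. The wording is illustrative only — the RPC mechanism (`eth_getCode`) is an assumption about what `getContractBytecode` does, and none of this text appears in the actual server:

```typescript
// Illustrative rewrite of the tool description with behavioral disclosure.
// The server's real description is the one-liner quoted above; the claims
// about RPC calls, auth, and rate limits here are hypothetical.
const improvedDescription =
  "Compare bytecode of two contracts on Base mainnet for clone detection. " +
  "Read-only: performs two eth_getCode RPC calls and writes nothing on-chain. " +
  "Returns exactMatch (boolean) and selectorSimilarity (0-100, Jaccard overlap " +
  "of 4-byte function selectors). No authentication required; subject to the " +
  "underlying RPC provider's rate limits.";
```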

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and return value with zero wasted words. It is appropriately sized for a tool with two parameters and clear functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (comparing bytecode for clone detection), no annotations, and no output schema, the description is minimally complete. It states what the tool does and returns, but lacks details on behavior, usage context, or output interpretation, which are needed for full understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: the schema already documents both parameters ('address1' and 'address2') as contract addresses. The description adds nothing beyond the schema — no format requirements (e.g. 0x-prefixed 40-character hex) and no examples — but it meets the baseline expected when schema coverage is this high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('compare bytecode') and resources ('two contracts on Base mainnet'), and distinguishes its function from siblings by specifying 'for clone detection' and the return type ('similarity score and whether they share the same code'). This is more specific than generic siblings like 'analyze_bytecode' or 'get_contract_info'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, exclusions, or compare it to sibling tools like 'analyze_bytecode' or 'scan_contract', leaving the agent to infer usage context solely from the purpose statement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lordbasilaiassistant-sudo/base-security-scanner-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.