
Server Details

EVM audit (Slither + source + security.txt + MCP-probe + wallet-exposure). 6 tools + /trace.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Descriptions (A)

Average 4.3/5 across 6 of 6 tools scored. Lowest: 3.3/5.

Server Coherence (A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: contract auditing, wallet exposure checking, source code fetching, security contact listing, endpoint probing, and chain support listing. The descriptions reinforce unique domains (security auditing, wallet analysis, source retrieval, contact discovery, server testing, and configuration).

Naming Consistency: 4/5

Tools follow a consistent verb_noun pattern (e.g., audit_contract, check_my_wallet_exposure, fetch_contract_source) with one minor deviation: 'supported_chains' uses an adjective_noun form instead of a verb. This small inconsistency doesn't hinder readability but prevents a perfect score.

Tool Count: 5/5

Six tools is well-scoped for a security auditing server, covering core areas like contract analysis, wallet safety, source access, contact information, endpoint validation, and configuration. Each tool earns its place without bloat or thin coverage.

Completeness: 4/5

The toolset provides strong coverage for security auditing workflows, including analysis, exposure checks, and disclosure contacts. A minor gap exists in lacking tools for updating or managing findings (e.g., a tool to track audit results over time), but agents can work around this with existing tools.

Available Tools

6 tools
audit_contract (A)

Run Slither on a verified EVM contract and return a finding summary.

Zero-arg shape: audit_contract() with no arguments returns a pre-cached Slither summary from a phase-3.5 live scan of Uniswap V3 Factory (78 findings: 1 High, 42 Medium, 9 Low, 26 Informational). Real data from our pipeline, not fabricated. Pointers to the argument form are included in the demo_note field.

Parameters (JSON Schema)

  chain (optional; default: ethereum): One of ethereum, base, optimism, arbitrum, polygon, gnosis, scroll, linea, blast, zksync (or short aliases mainnet / optim / arbi / poly).
  address (optional): 0x-prefixed 40-hex-character contract address. Omit or pass None / empty string to trigger demo mode.
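The address format above (0x prefix plus exactly 40 hex characters) is easy to pre-validate client-side before spending a call; a minimal sketch (the helper is ours, not part of the server):

```python
import re

# 0x prefix followed by exactly 40 hexadecimal characters
_ADDRESS_RE = re.compile(r"0x[0-9a-fA-F]{40}")

def looks_like_evm_address(value: str) -> bool:
    """Cheap client-side sanity check; the server still validates."""
    return bool(_ADDRESS_RE.fullmatch(value))
```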

Output Schema (JSON Schema)

  No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool runs Slither (a security analysis tool), returns a summary, includes a demo mode with pre-cached data from Uniswap V3 Factory, and notes that real data is used. It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose in the first sentence. The second sentence provides important behavioral context about demo mode. The third sentence adds credibility about data authenticity. Each sentence earns its place, though the formatting with backticks and technical details slightly reduces readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (security auditing tool with demo mode), no annotations, but 100% schema coverage and an output schema, the description is reasonably complete. It explains what the tool does, the demo behavior, and data authenticity. With an output schema present, it doesn't need to detail return values. The main gap is lack of error handling or performance characteristics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters (chain and address). The description adds some value by explaining the zero-argument demo mode behavior and mentioning the demo_note field, but doesn't provide additional semantic meaning beyond what's in the schema. This meets the baseline of 3 when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Run Slither on a verified EVM contract and return a finding summary.' This specifies the verb ('Run Slither'), resource ('verified EVM contract'), and output ('finding summary'). It distinguishes from siblings like fetch_contract_source (gets source code) or list_security_contacts (lists contacts).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool: for security auditing of EVM contracts using Slither. It explicitly explains the zero-argument demo mode behavior. However, it doesn't specify when NOT to use it or mention alternatives among sibling tools, such as when you might need source code instead of audit results.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_my_wallet_exposure (A)

Read-only wallet-exposure report: scam flags + outstanding approvals.

Fetches Blockscout v2 /addresses/{addr} (is_scam, reputation, watchlist_names) + a bounded Approval-topic log lookup for the most recent outstanding ERC-20 approvals made by the address. Cross-refs each spender against a static bundled known-malicious-contract list.

SAFETY ENVELOPE (hard rules):

  • Read-only HTTP to Blockscout only; no user-provided URL input.

  • Per-chain whitelist: mainnet/ethereum, base-mainnet/base, arbitrum-mainnet/arbitrum, optimism-mainnet/optimism, eth-sepolia/sepolia.

  • No signing; no filesystem writes.

  • Bounded output: max 50 approvals (with approvals_truncated flag).

Heuristic exposure_score:

  • high if is_scam=True OR any approval targets a known-malicious spender.

  • medium if strictly more than 10 outstanding infinite (MAX-uint) approvals.

  • low otherwise.
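The heuristic above can be restated in a few lines of Python (field names and the helper are our illustration of the documented rules, not the server's actual code):

```python
MAX_UINT = 2**256 - 1  # an "infinite" ERC-20 approval

def exposure_score(is_scam: bool, approvals: list[dict], malicious_spenders: set[str]) -> str:
    """approvals: dicts with 'spender' and 'amount' keys, per the description."""
    # high: scam-flagged address, or any approval to a known-malicious spender
    if is_scam or any(a["spender"] in malicious_spenders for a in approvals):
        return "high"
    # medium: strictly more than 10 outstanding infinite approvals
    infinite = sum(1 for a in approvals if a["amount"] == MAX_UINT)
    if infinite > 10:
        return "medium"
    return "low"
```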

Parameters (JSON Schema)

  chain (optional; default: ethereum): Chain alias from the whitelist.
  address (required): 0x-prefixed 40-hex-character EVM address.

Output Schema (JSON Schema)

  No output parameters

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and does so comprehensively. It explicitly states the tool is read-only, details the specific API endpoint used, lists supported chains, confirms no signing or filesystem writes, describes output bounding (max 50 approvals), and explains the heuristic exposure scoring algorithm. This provides rich behavioral context beyond what would be in basic annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and well-structured with clear sections: purpose statement, data sources, safety envelope, and scoring algorithm. Every sentence adds value, though some technical details like specific API paths could be considered slightly verbose. The information is front-loaded with the core purpose stated immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (security assessment with multiple data sources and scoring logic), no annotations, and the presence of an output schema, the description is remarkably complete. It covers purpose, data sources, safety constraints, operational limits, and the scoring algorithm. With an output schema handling return values, the description focuses appropriately on behavioral context and usage guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline would be 3. However, the description adds significant value by explaining the chain parameter's whitelist in detail (listing all supported chains with aliases) and providing context about the address parameter's purpose (for wallet exposure assessment). This goes beyond the schema's basic descriptions, though it doesn't add format details beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Read-only wallet-exposure report', 'Fetches Blockscout v2', 'Cross-refs each spender') and resources (wallet address, scam flags, approvals). It distinguishes itself from siblings like audit_contract or fetch_contract_source by focusing on wallet security assessment rather than contract analysis or source code retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: for wallet security assessment specifically checking scam flags and outstanding approvals. It implicitly distinguishes from siblings by its unique focus on wallet exposure rather than contract auditing, source fetching, or endpoint probing. The safety envelope section also clarifies operational constraints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fetch_contract_source (A)

Return the verified Solidity source of a contract (no Slither).

Complement to audit_contract: for clients that want to run their own analyser, diff against a local fork, or just read the deployed source.

Parameters (JSON Schema)

  chain (optional; default: ethereum): One of the supported aliases (see supported_chains).
  address (required): 0x-prefixed 40-hex-character contract address.
  include_contents (optional; default: True): If False, omit file bodies and return only the file list + sizes.
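A rough sketch of what include_contents=False implies, assuming the response maps file paths to source text (the field names here are hypothetical):

```python
def strip_bodies(files: dict[str, str]) -> list[dict]:
    """Given {path: source}, keep only the file list with sizes in bytes."""
    return [
        {"path": path, "size_bytes": len(source.encode("utf-8"))}
        for path, source in files.items()
    ]
```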

Output Schema (JSON Schema)

  No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions the tool returns 'verified Solidity source' and excludes Slither analysis, which adds useful context about what the tool does and doesn't include. However, it doesn't disclose behavioral traits like rate limits, authentication requirements, or error conditions that would be helpful for an agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with just two sentences, both of which earn their place. The first sentence states the core purpose, and the second provides crucial usage guidance and differentiation from siblings. There's zero wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists (so return values are documented elsewhere), the description provides good contextual completeness. It clearly explains the tool's purpose, when to use it, and what it excludes. The main gap is lack of behavioral disclosure (rate limits, auth needs, etc.), but with output schema handling return values, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Return the verified Solidity source') and resource ('of a contract'), and explicitly distinguishes it from sibling 'audit_contract' by noting it's a 'complement' for clients who want to run their own analysis or read the deployed source. This provides clear differentiation from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool versus alternatives: 'Complement to ``audit_contract``: for clients that want to run their own analyser, diff against a local fork, or just read the deployed source.' This provides clear guidance on when this tool is appropriate versus its sibling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_security_contacts (A)

Look up security.txt Contact entries from a static 319-domain snapshot.

Useful for clients that have just run audit_contract and now want to know where to disclose a finding. The snapshot was computed from a 2026-04-18 survey across DeFi / infra / audit-ecosystem domains. No live HTTP fetches — consult snapshot_date in the response for freshness.

Parameters (JSON Schema)

  domains (required): List of 1-100 bare domain names ("curve.fi"). URL-like or path-containing inputs are rejected.
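A client-side pre-check matching the "bare domain, no URLs or paths" rule could look like this (an approximation; the server's exact rejection rules may differ):

```python
def is_bare_domain(value: str) -> bool:
    """Accept 'curve.fi'; reject URLs, paths, ports, and whitespace."""
    if "://" in value or "/" in value or ":" in value or " " in value:
        return False
    labels = value.split(".")
    # at least two non-empty dot-separated labels
    return len(labels) >= 2 and all(labels)
```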

Output Schema (JSON Schema)

  No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and does so effectively. It explains key behavioral traits: the tool uses a static snapshot (not live data), includes a freshness indicator (snapshot_date in response), and specifies the data source (2026-04-18 survey across specific domains). However, it doesn't mention potential limitations like rate limits or authentication requirements, though these might not apply given the static nature.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise. The first sentence immediately states the core purpose, the second provides usage context, and the third explains key behavioral constraints. Every sentence earns its place with no wasted words, and information is front-loaded for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, static data lookup), the description provides excellent contextual completeness. It explains the tool's purpose, when to use it, behavioral characteristics (static snapshot, no live fetches), and data freshness. With an output schema present, the description appropriately doesn't need to detail return values, focusing instead on the operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds meaningful context beyond the schema by explaining the purpose of the domains parameter ('Look up security.txt Contact entries'), specifying the data source ('319-domain snapshot'), and clarifying what type of data is returned (contact information for vulnerability disclosure). This provides valuable semantic understanding that complements the schema's technical specifications.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Look up security.txt Contact entries') and resource ('from a static 319-domain snapshot'), distinguishing it from siblings like audit_contract (which audits contracts) and fetch_contract_source (which retrieves source code). It precisely defines what the tool does without being vague or tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Useful for clients that have just run audit_contract and now want to know where to disclose a finding'), provides context on its limitations ('No live HTTP fetches'), and distinguishes it from potential alternatives by specifying it uses a static snapshot rather than real-time data. This gives clear guidance on appropriate usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

probe_mcp_endpoint (A)

Run a bounded JSON-RPC conformance probe against another MCP server.

The input is a registry namespace/name string (e.g. "com.trycloudflare.candy-josh-writers-balance/yultrace-audit"); the tool looks up the registry-published remotes[].url and runs the probe against that URL. Arbitrary URLs are NOT accepted — targets are whitelisted to the official MCP Registry's isLatest+active entries.

The probe runs five bounded sub-tests (initialize, tools/list, malformed-body, unknown-method, oversized-params) under hard caps (10 HTTP requests; 30 s wall-clock; 500 KB per response; 50 KB final return size).
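The hard caps above read like a small budget guard; a sketch of how a client could mirror them locally (class and method names are ours, not part of the server):

```python
import time

class ProbeBudget:
    """Tracks the documented hard caps: 10 HTTP requests, 30 s wall-clock,
    500 KB per response."""
    MAX_REQUESTS = 10
    MAX_SECONDS = 30
    MAX_RESPONSE_BYTES = 500 * 1024

    def __init__(self) -> None:
        self.started = time.monotonic()
        self.requests = 0

    def allow_request(self) -> bool:
        """Consume one request slot, unless a cap is already exhausted."""
        if self.requests >= self.MAX_REQUESTS:
            return False
        if time.monotonic() - self.started > self.MAX_SECONDS:
            return False
        self.requests += 1
        return True

    def response_ok(self, body: bytes) -> bool:
        """Check a response body against the per-response size cap."""
        return len(body) <= self.MAX_RESPONSE_BYTES
```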

Parameters (JSON Schema)

  server_name (required): Registry namespace/name of a currently-listed MCP server (must have isLatest=True and status=active).

Output Schema (JSON Schema)

  No output parameters

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and does so comprehensively. It details the probe's five specific sub-tests, hard caps (10 HTTP requests, 30s wall-clock, 500KB per response, 50KB final return size), and operational constraints (whitelisted targets only). This provides rich behavioral context beyond what the input schema alone would convey.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero wasted sentences. It begins with the core purpose, explains the input format and constraints, then details the probe's behavior and limitations. Each sentence adds essential information without redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (testing another MCP server with multiple sub-tests and constraints), no annotations, and the presence of an output schema, the description provides complete contextual information. It explains what the tool does, how it works, its limitations, and operational parameters, making it fully understandable without needing to reference the output schema for basic comprehension.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100% for the single parameter, so the baseline would be 3. However, the description adds meaningful context about the parameter's purpose ('looks up the registry-published remotes[].url'), format requirements ('namespace/name string'), and validation constraints ('must have isLatest=True + status=active'), providing value beyond the schema's technical documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the specific action ('Run a bounded JSON-RPC conformance probe') and target resource ('against another MCP server'), clearly distinguishing it from sibling tools like audit_contract or fetch_contract_source. It provides concrete details about what the probe does rather than just restating the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (for testing MCP server conformance) and when not to use it ('Arbitrary URLs are NOT accepted — targets are whitelisted to the official MCP Registry's isLatest+active entries'). It clearly defines the valid input scope and restrictions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

supported_chains (B)

Return the list of chain aliases the audit tool accepts.

Parameters (JSON Schema)

  No parameters

Output Schema (JSON Schema)

  result (required)

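Judging from the aliases documented on audit_contract, the returned mapping plausibly resembles the table below; this is a guess for illustration, so call the tool for the authoritative list:

```python
# Hypothetical alias table inferred from audit_contract's documented aliases
CHAIN_ALIASES = {
    "mainnet": "ethereum",
    "optim": "optimism",
    "arbi": "arbitrum",
    "poly": "polygon",
}

def canonical_chain(name: str) -> str:
    """Resolve a short alias to its canonical chain name, else pass through."""
    return CHAIN_ALIASES.get(name, name)
```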
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns a list, implying a read-only operation, but doesn't clarify aspects like whether it requires authentication, has rate limits, or what format the output takes. While the output schema exists, the description lacks details on potential errors or side effects, leaving gaps in behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that directly states the tool's purpose without any unnecessary words. It is front-loaded and efficient, making it easy for an agent to parse quickly. Every part of the sentence contributes to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters, no annotations, but with an output schema), the description is reasonably complete. It explains what the tool does, and since an output schema exists, it doesn't need to detail return values. However, it could improve by adding minimal context, such as why this list is useful or how it relates to sibling tools, to fully guide usage in a broader workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, and the input schema has 100% description coverage (though empty). The description doesn't need to explain parameters, as there are none, so it appropriately avoids redundancy. This meets the baseline for zero parameters, where no additional parameter information is required beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return the list of chain aliases the audit tool accepts.' It specifies the verb ('return'), resource ('list of chain aliases'), and scope ('the audit tool accepts'), making the action unambiguous. However, it doesn't explicitly differentiate this from sibling tools like 'audit_contract' or 'fetch_contract_source', which might also involve chain-related operations, so it falls short of a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for usage, or comparisons to sibling tools such as 'audit_contract' or 'list_security_contacts'. Without this, an agent might struggle to determine the appropriate scenario for invoking this tool, relying solely on the purpose statement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
