compound-mcp
Server Details
Compound - 8 tools for lending rates, supply, and borrow data
- Status: Unhealthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: junct-bot/compound-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.4/5 across 8 of 8 tools scored. Lowest: 2.7/5.
The tools have overlapping purposes that could cause confusion, particularly between 'admin' and 'implementation' as read functions, and 'changeAdmin', 'upgradeTo', and 'upgradeToAndCall' as write functions with similar upgrade-related actions. However, descriptions help clarify their specific roles, such as distinguishing event subscriptions from function calls.
Naming is inconsistent with a mix of styles: 'admin' and 'implementation' use lowercase nouns, 'changeAdmin' and 'upgradeTo' use camelCase verbs, and event tools like 'AdminChanged' use PascalCase. This lack of a predictable pattern makes the set harder to navigate and understand at a glance.
With 8 tools, the count is reasonable for managing a smart contract's admin and upgrade functions, covering read operations, write operations, and event subscriptions. It feels slightly heavy but appropriate for the domain, as it includes both core functions and event-handling tools.
The tool set provides good coverage for smart contract upgrade and admin management, including read (admin, implementation), write (changeAdmin, upgradeTo, upgradeToAndCall), and event subscription (AdminChanged, BeaconUpgraded, Upgraded) functions. Minor gaps might exist, such as lack of tools for checking access control status or handling errors, but core workflows are well-supported.
Available Tools
8 tools

admin (grade C)
Calls admin(). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. Returns admin_ (address).
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. The 'Write function' classification is likely misleading (this appears to be a view function), though it correctly notes access control restrictions and the return type.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences that are mostly efficient. The 'Calls admin()' fragment wastes space, but the access control warning and return value specification earn their place. Not overly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for missing output schema by specifying the return type (address), and mentions access control relevant to admin functions. However, the misleading 'Write function' label and lack of proxy pattern context leave gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters with 100% schema coverage (vacuously). The description appropriately implies no arguments are needed by stating simply 'Calls admin()'. Baseline 4 for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it returns 'admin_ (address)' which clarifies the resource, but 'Calls admin()' is tautological. The classification as a 'Write function' is confusing given it returns a value and likely serves as a getter (contrasted with sibling changeAdmin).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides prerequisite guidance to 'Check contract source for modifier requirements,' but fails to distinguish when to use this getter versus the sibling 'changeAdmin' setter or other proxy-related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
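The classification confusion flagged above matters in practice: if admin() is in fact a view function, it can be exercised with a read-only eth_call rather than a transaction. A minimal sketch of that request and its decoding, assuming the standard transparent-proxy ABI (the 0xf851a440 selector and the placeholder address are assumptions, not taken from this listing):

```python
# Sketch: what an eth_call for admin() looks like at the JSON-RPC level,
# assuming this tool wraps a standard read against a transparent proxy.
# 0xf851a440 is the commonly published 4-byte selector for admin();
# PROXY is a placeholder address, not the real deployed contract.
PROXY = "0x" + "ab" * 20

def admin_call_payload(proxy):
    """Build the JSON-RPC eth_call request for admin() (no arguments)."""
    return {
        "jsonrpc": "2.0",
        "method": "eth_call",
        "params": [{"to": proxy, "data": "0xf851a440"}, "latest"],
        "id": 1,
    }

def decode_address_word(word):
    """Decode one ABI-encoded 32-byte return word into a 0x-prefixed address."""
    raw = word[2:] if word.startswith("0x") else word
    assert len(raw) == 64, "expected exactly one 32-byte word"
    return "0x" + raw[-40:]  # the address occupies the low 20 bytes

payload = admin_call_payload(PROXY)
```

Nothing here mutates state; decoding the documented admin_ (address) return is just stripping the 12 zero-padding bytes from the single return word.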
AdminChanged (grade C)
Event emitted by the contract. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| newAdmin | Yes | address (Ethereum address, 0x-prefixed). | |
| previousAdmin | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions subscription but lacks critical behavioral details: what the return format is, whether this is a persistent subscription or one-time poll, permissions required, or what triggers the event emission.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence structure is appropriately terse and front-loaded. Every sentence serves a purpose (identifying the tool type and usage mechanism), though greater specificity would improve clarity without hurting conciseness.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description should better explain the event semantics (what constitutes an admin change) and subscription behavior. The 100% schema parameter coverage helps, but the description leaves gaps in explaining the event lifecycle.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (both previousAdmin and newAdmin documented as Ethereum addresses), the schema fully carries the parameter semantics. The description adds no parameter information, which is acceptable given the high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an event and mentions subscription via log filters, but 'Event emitted by the contract' is vague. It fails to clearly distinguish this listener tool from the sibling action tool 'changeAdmin' or explicitly state this monitors admin ownership transfers.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While 'Subscribe via log filters' indicates the mechanism, there is no guidance on when to use this event subscription versus the sibling 'changeAdmin' action tool, nor any prerequisites for setting up log filters.
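Since neither AdminChanged argument is marked indexed in the table above, only the event-signature topic can be used for filtering; both addresses arrive ABI-encoded in the log's data field. A hedged sketch of the filter and decoder (the topic constant is the commonly published keccak-256 of `AdminChanged(address,address)` — verify it with your own tooling):

```python
# Sketch: an eth_getLogs filter for AdminChanged(previousAdmin, newAdmin).
# Neither argument is indexed, so only topic0 (the event signature hash)
# can be filtered on; both addresses arrive ABI-encoded in the data field.
# The topic constant is an assumption — verify with your own keccak tooling.
ADMIN_CHANGED_TOPIC = (
    "0x7e644d79422f17c01e4894b5f4f588d331ebfa28653d42ae832dc59e38c9798f"
)

def admin_changed_filter(contract):
    """eth_getLogs params matching every AdminChanged event on one contract."""
    return {
        "address": contract,
        "topics": [ADMIN_CHANGED_TOPIC],
        "fromBlock": "0x0",
        "toBlock": "latest",
    }

def decode_admin_changed(data):
    """Split the log's data field into (previousAdmin, newAdmin)."""
    raw = data[2:] if data.startswith("0x") else data
    assert len(raw) == 128, "expected two 32-byte words"
    return "0x" + raw[:64][-40:], "0x" + raw[64:][-40:]
```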
BeaconUpgraded (grade C)
Event emitted by the contract. Indexed fields (filterable): beacon. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| beacon | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It states this is an emitted event but fails to describe what happens upon invocation (streaming vs polling? return format? historical vs real-time?), rate limits, or read-only safety implications inherent to log subscriptions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences with no redundancy. Information is front-loaded with the event type declaration. Could be more informative rather than just terse.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a blockchain event tool, lacks explanation of the beacon proxy pattern significance (when this event fires, what it signifies regarding upgrades). No output schema exists, yet description doesn't hint at return structure or log format.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage. The description reinforces that 'beacon' is indexed and filterable, adding slight semantic value, but doesn't explain what a beacon address represents in the proxy pattern or provide format examples beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies this as an event subscription tool for contract logs, specifically mentioning the 'beacon' indexed field which distinguishes it from sibling event tools like 'Upgraded' or 'AdminChanged'. However, it doesn't explicitly contrast with these siblings to clarify when to use each.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions 'Subscribe via log filters' indicating the mechanism, but provides no guidance on when to use this versus the similar 'Upgraded' event, nor prerequisites like contract address requirements.
changeAdmin (grade A)
Calls changeAdmin(newAdmin: string). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| newAdmin | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden effectively. It discloses the write/mutating nature, warns about access control modifiers, and explicitly states 'No return value'. It omits mention of side effects like emitting the 'AdminChanged' event (visible in sibling list), but covers the critical safety aspects.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tight sentences with zero waste. Front-loads the function signature, immediately follows with write/access-control warnings, then prerequisites, then output behavior. Every clause earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single parameter, no output schema), the description is nearly complete. It covers the critical blockchain-specific context (access control modifiers, no return value). Could be elevated to 5 by mentioning the 'AdminChanged' event emission given the sibling tool name suggests this side effect.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with 'newAdmin' well-described as an Ethereum address. The description mentions 'newAdmin: string' but adds no semantic detail beyond the schema (e.g., no mention of valid address formats or checksums). Baseline 3 is appropriate when schema documentation is already comprehensive.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies the tool as a 'Write function' that changes the admin, distinguishing it from the read-only 'admin' sibling and event 'AdminChanged'. The function signature reinforces the specific action, though it leans slightly on the tool name rather than fully describing the conceptual operation (transferring administrative rights).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit safety guidance: 'Check contract source for modifier requirements before calling' and warns of 'access control restrictions (e.g. onlyOwner)'. This establishes prerequisites (must have owner privileges) but does not explicitly reference the sibling 'admin' tool as the read alternative.
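As a concrete illustration of what a changeAdmin call carries on the wire, the calldata is just the 4-byte selector plus one left-padded address word. The selector below is the commonly published value for changeAdmin(address) — treat it as an assumption and verify against the deployed contract's ABI before signing anything:

```python
# Sketch: ABI-encoding calldata for changeAdmin(address newAdmin).
# 0x8f283970 is the commonly published selector for changeAdmin(address);
# verify it against the deployed contract before sending a real transaction.
CHANGE_ADMIN_SELECTOR = "8f283970"

def encode_change_admin(new_admin):
    raw = new_admin[2:] if new_admin.startswith("0x") else new_admin
    if len(raw) != 40:
        raise ValueError("expected a 20-byte hex address")
    # A static address argument is left-padded to one 32-byte word.
    return "0x" + CHANGE_ADMIN_SELECTOR + raw.lower().rjust(64, "0")

calldata = encode_change_admin("0x" + "cd" * 20)
```

The encoding alone does nothing; submitting it as a transaction is the mutating step, which is where the onlyOwner-style modifiers warned about above would revert an unauthorized caller.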
implementation (grade B)
Calls implementation(). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. Returns implementation_ (address).
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully warns about access control restrictions (onlyOwner), but the 'Write function' claim is unsupported regarding what state changes occur, and it omits revert conditions or gas considerations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively brief at four sentences, but the unexplained 'Write function' assertion creates confusion that undermines the value of the conciseness; an additional clarifying phrase about side effects or transaction requirements would improve utility.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description correctly notes the return type (address). However, with no annotations and unclear 'Write function' classification, the description should do more to clarify whether this operation changes state or is simply an authorized read operation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description appropriately does not discuss parameters since none exist, and no compensation for missing schema documentation is required.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the action (calls implementation()) and return value (address), but the 'Write function' label is confusing and potentially inaccurate for what appears to be a getter function in standard proxy patterns, creating ambiguity about the tool's actual behavior versus siblings like upgradeTo.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides some guidance by warning to check contract source for modifier requirements, but fails to explain when to use this tool versus siblings like upgradeTo or admin (e.g., for reading current implementation vs changing it), or prerequisites for successful invocation.
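One way to sidestep the ambiguous 'Write function' label entirely is to read the implementation address straight from storage: ERC-1967 fixes the slot where compliant proxies store it. A sketch under that assumption, with the slot constant being the widely published value of keccak256("eip1967.proxy.implementation") - 1 (verify independently before relying on it):

```python
# Sketch: reading the implementation address via eth_getStorageAt instead
# of calling implementation(). ERC-1967 proxies store it at a fixed slot;
# the constant below is the published value of
# keccak256("eip1967.proxy.implementation") - 1 (verify independently).
ERC1967_IMPL_SLOT = (
    "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc"
)

def storage_payload(proxy):
    """JSON-RPC request for the raw 32-byte slot value at the latest block."""
    return {
        "jsonrpc": "2.0",
        "method": "eth_getStorageAt",
        "params": [proxy, ERC1967_IMPL_SLOT, "latest"],
        "id": 1,
    }

def slot_to_address(word):
    """The implementation address sits in the low 20 bytes of the slot."""
    raw = word[2:] if word.startswith("0x") else word
    return "0x" + raw.zfill(64)[-40:]
```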
Upgraded (grade B)
Event emitted by the contract. Indexed fields (filterable): implementation. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| implementation | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully clarifies this is an event to subscribe to rather than a function to invoke, and notes the filterable nature of logs. However, it omits what triggers this event (successful proxy upgrades) and lacks safety or side-effect disclosure typical for upgrade-related operations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: identifies the event type, highlights the indexed/filterable parameter characteristic, and states the subscription mechanism. Information is front-loaded and dense.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter event tool without output schema, the description covers basic necessities (event nature, subscription method). However, it lacks context about the proxy upgrade pattern (ERC-1967) that would help agents understand when this event fires and why the implementation address matters.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage for the single 'implementation' parameter. The description adds value by explicitly labeling this as an 'Indexed field (filterable)', which reinforces how to use it in log filter subscriptions—a semantic layer beyond the raw schema definition.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as an 'Event emitted by the contract' with an indexed implementation field, distinguishing it from callable sibling functions like upgradeTo. However, it fails to differentiate from the similar sibling event 'BeaconUpgraded' or clarify this relates to proxy contract patterns.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description notes to 'Subscribe via log filters' (indicating it's for listening), it provides no guidance on when to use this event subscription versus the callable upgradeTo function, nor when to prefer this over the BeaconUpgraded event sibling. No alternatives or prerequisites are mentioned.
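Unlike AdminChanged, the implementation field here is indexed, so a specific address can be matched directly as topic1 in a log filter rather than decoded from the data field. A sketch (the topic0 constant is the commonly published keccak-256 of Upgraded(address) — an assumption to verify with your own tooling):

```python
# Sketch: filtering Upgraded(address indexed implementation) logs.
# Because the argument is indexed, a specific implementation address can be
# matched as topic1 instead of being decoded from the data field.
# The topic0 constant is an assumption — verify with your own keccak tooling.
UPGRADED_TOPIC = (
    "0xbc7cd75a20ee27fd9adebab32041f755214dbc6bffa90cc0225b39da2e5c2d3b"
)

def address_topic(addr):
    """Left-pad a 20-byte address to the 32-byte topic format."""
    raw = addr[2:] if addr.startswith("0x") else addr
    return "0x" + raw.lower().rjust(64, "0")

def upgraded_filter(contract, implementation=None):
    """eth_getLogs params; pass implementation to match one target only."""
    topics = [UPGRADED_TOPIC]
    if implementation is not None:
        topics.append(address_topic(implementation))  # topic1: indexed arg
    return {
        "address": contract,
        "topics": topics,
        "fromBlock": "0x0",
        "toBlock": "latest",
    }
```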
upgradeTo (grade A)
Calls upgradeTo(newImplementation: string). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| newImplementation | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden effectively. It explicitly labels this as a 'Write function,' warns about access control modifiers, and states 'No return value.' This covers the essential behavioral traits (mutation, auth requirements, void return) but misses secondary behaviors like emitted events (relevant given 'Upgraded' sibling) or irreversibility of the upgrade.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of four efficient sentences, front-loaded with the function signature, followed by critical warnings, and ending with return value information. There is no redundant or wasted text; every clause serves a distinct purpose (identification, authorization warning, prerequisite check, return disclosure).
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a single-parameter mutation tool with no output schema, the description appropriately covers the essential gaps: it identifies the void return (compensating for missing output schema), warns about write-side effects, and mentions authorization requirements. It could be improved by noting this is specifically for UUPS/Transparent proxy patterns (evident from sibling tools like 'BeaconUpgraded'), but remains sufficient for safe invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('address (Ethereum address, 0x-prefixed)'), establishing a baseline of 3. The description references the parameter in the function signature ('newImplementation: string') but adds no additional semantic context—such as validating the address is a deployed contract or explaining it represents the new logic contract address—beyond what the schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Calls upgradeTo(newImplementation: string)' identifying the specific function and parameter. However, it assumes familiarity with proxy patterns rather than explicitly stating this upgrades a contract's implementation address. It distinguishes itself as a direct call (vs. the sibling 'upgradeToAndCall') through the signature notation, though it doesn't explain the proxy upgrade concept explicitly.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description warns about access control restrictions ('may have access control restrictions (e.g. onlyOwner)') and advises checking 'contract source for modifier requirements before calling,' which provides important prerequisite guidance. However, it fails to distinguish when to use this tool versus the sibling 'upgradeToAndCall'—a critical distinction in proxy patterns where the latter also initializes the new implementation.
upgradeToAndCall (grade A)
Calls upgradeToAndCall(newImplementation: string, data: string). Requires ETH value to be sent with the transaction. Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | bytes (hex-encoded bytes, 0x-prefixed). | |
| newImplementation | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure despite no annotations. Covers: mutability (write function), financial requirements (ETH value), permission model (access control restrictions), and output behavior (no return value). This compensates fully for missing structured annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five dense sentences with zero waste. Critical information (ETH requirement, access control, return value) is front-loaded and logically ordered. No redundant phrases or generic filler.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of proxy upgrades and absence of output schema/annotations, the description covers essential safety and behavioral aspects. Could briefly mention this is for UUPS/transparent proxy patterns to ground the agent, but the sibling tools provide sufficient context.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (address and bytes types documented). The description mentions parameters in the function signature but adds minimal semantic context beyond the schema (e.g., doesn't explain that 'data' is typically for initialization calldata). Baseline 3 appropriate when schema carries the load.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a tautological function signature reference but quickly clarifies this is a state-changing upgrade operation. It distinguishes from sibling 'upgradeTo' via the 'data' parameter and ETH requirement, though it could explicitly state it upgrades proxy implementations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical safety constraints: ETH value requirement, access control warnings (onlyOwner), and prerequisites (check contract source). Could explicitly contrast with 'upgradeTo' for cases where ETH/call data aren't needed, but the ETH requirement strongly signals the differentiation.
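The contrast with the sibling upgradeTo comes down to wire format: upgradeTo carries one static address word, while upgradeToAndCall appends a dynamic bytes argument (offset word, length word, zero-padded payload) and may carry ETH. A sketch of that encoding; the selector and the 4-byte initialize() payload are assumptions to verify against the deployed ABI:

```python
# Sketch: ABI-encoding upgradeToAndCall(address newImplementation, bytes data).
# 0x4f1ef286 is the commonly published selector — verify against the ABI.
# The dynamic bytes argument becomes: an offset word in the head, then a
# length word and zero-padded payload in the tail.
SELECTOR = "4f1ef286"

def encode_upgrade_to_and_call(new_impl, data):
    raw = new_impl[2:] if new_impl.startswith("0x") else new_impl
    head = raw.lower().rjust(64, "0")             # word 1: static address arg
    head += format(0x40, "x").rjust(64, "0")      # word 2: offset of bytes tail
    tail = format(len(data), "x").rjust(64, "0")  # word 3: bytes length
    padded = -(-len(data) // 32) * 32             # round payload up to 32 bytes
    tail += data.hex().ljust(padded * 2, "0")
    return "0x" + SELECTOR + head + tail

# Hypothetical payload: a 4-byte initialize() call on the new implementation.
calldata = encode_upgrade_to_and_call("0x" + "ee" * 20, bytes.fromhex("8129fc1c"))
```

The data argument is what makes the upgrade-plus-initialize pattern atomic: the proxy switches implementations and immediately delegatecalls the payload in the same transaction.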
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
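Before publishing, the file can be sanity-checked locally. A minimal sketch covering only the fields in the example above; the schema at the $schema URL is the authoritative reference:

```python
# Sketch: a local sanity check of the /.well-known/glama.json structure
# shown above. The schema at the $schema URL is authoritative; this only
# validates the fields that appear in the example.
import json

manifest_text = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""

def manifest_problems(doc):
    """Return a list of human-readable problems; empty means it looks OK."""
    problems = []
    if "$schema" not in doc:
        problems.append("missing $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    else:
        for m in maintainers:
            if not isinstance(m, dict) or "@" not in m.get("email", ""):
                problems.append("each maintainer needs an email field")
    return problems

problems = manifest_problems(json.loads(manifest_text))
```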
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!