maker-mcp
Server Details
Maker - 10 tools for lending rates, supply, and borrow data
- Status: Unhealthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: junct-bot/maker-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
10 tools

cage (grade: C)
Calls cage(). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully identifies the operation as a write function with no return value and warns about access control modifiers. However, it omits critical behavioral context, such as whether the operation is reversible or destructive and what system state changes it triggers.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of four short sentences. The first sentence ('Calls cage().') wastes space on a tautology and could be removed. The remaining sentences are information-dense regarding access control and return values, but the wasted opening prevents a higher score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no output schema and zero parameters (which are adequately addressed), the description is incomplete for a state-changing tool. It fails to explain the business logic or side effects of invoking cage(), which is critical context for an operation likely involving system-level state changes (e.g., freezing or sealing a contract).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per the evaluation guidelines, zero-parameter tools receive a baseline score of 4, as there are no parameter semantics to describe beyond the schema itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description leads with 'Calls cage()', which is tautological and fails to explain what the cage operation actually accomplishes (e.g., emergency shutdown). While it identifies the tool as a 'Write function', this does not distinguish it from siblings like deny, rely, exit, or join, which are also state-changing operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a prerequisite warning ('Check contract source for modifier requirements before calling') and notes access control restrictions (onlyOwner). However, it lacks explicit guidance on when to select this tool versus alternatives, offering only an implied safety check rather than selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
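The fixes the review asks for can be made concrete. Below is a hypothetical rewrite of the cage tool definition, assuming the MakerDAO DaiJoin semantics that the tool names suggest (emergency shutdown setting live to 0); it illustrates the kind of disclosure being scored, not the server's actual definition.

```python
# Hypothetical improved tool definition -- an illustration of the disclosure
# the review calls for, not the server's actual definition. The semantics
# assume the MakerDAO DaiJoin pattern implied by the sibling tool names.
improved_cage = {
    "name": "cage",
    "description": (
        "Trigger emergency shutdown of the adapter: sets live to 0, after "
        "which join reverts (exit remains available). Irreversible. "
        "Restricted to authorized wards; other callers revert. "
        "Sends a transaction; no return value."
    ),
    "inputSchema": {"type": "object", "properties": {}, "required": []},
}
```

Note how the rewrite leads with the consequence (emergency shutdown), states reversibility, and names the authorization model, which addresses the disclosure, completeness, and selection-guidance gaps above in a single short paragraph.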
dai (grade: A)
Calls dai(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns address.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses key behavioral traits: 'Read-only — does not modify contract state' explains safety/idempotency, and 'Unrestricted' explains access control. It also notes the return type since no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of four efficient clauses with zero waste: function identification, mutability guarantee, access restriction, and return value. Information is front-loaded and every phrase earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple getter with no parameters, the description is nearly complete. It compensates for missing annotations by detailing mutability and access. However, it vaguely states 'Returns address' without specifying what address (e.g., the DAI token contract), which would be necessary for full completeness given the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per evaluation rules, zero-parameter tools receive a baseline score of 4. The description does not need to compensate for missing parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it calls dai() and returns an address, clarifying it's a read operation. While 'Calls dai()' is slightly tautological, the subsequent details about it being read-only and returning an address clearly establish this as a getter function, distinguishing it from state-modifying siblings like join/exit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes the function is 'Unrestricted — any address can call this read function,' which provides implicit guidance on accessibility. However, it lacks explicit when-to-use guidance or comparisons against alternatives like vat or other registry functions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deny (grade: B)
Calls deny(usr: string). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| usr | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description effectively discloses critical behavioral traits: it explicitly labels the operation as a 'Write function', warns about potential access control restrictions, and states 'No return value'. This gives the agent necessary context about mutation risks and output expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences efficiently structured: function signature first, classification second, risk warning third, return value last. No redundant words, though the opening 'Calls deny' could be more descriptive.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter write operation without output schema, the description covers essential safety concerns (access control restrictions) and return behavior. It appropriately delegates parameter specifics to the comprehensive schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the 'usr' parameter (Ethereum address). The description mentions 'usr: string' but adds no semantic context beyond the schema (e.g., that this is the address being denied/revoked). Baseline score appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as a 'Write function' with 'access control restrictions' but opens with the tautological 'Calls deny(usr: string)'. It fails to explain what 'deny' semantically means (e.g., revoking authorization) or how it relates to the sibling tool 'rely', which is likely its functional opposite in this context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific safety guidance to 'Check contract source for modifier requirements before calling' and warns about access control (e.g., onlyOwner). However, it lacks explicit guidance on when to use this tool versus alternatives like 'rely' or prerequisites for successful invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
exit (grade: B)
Calls exit(usr: string, wad: string). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| usr | Yes | address (Ethereum address, 0x-prefixed). | |
| wad | Yes | uint256 (uint256, pass as decimal string). | |
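The schema's instruction to pass wad as a decimal string maps onto standard ABI encoding, where the address and the uint256 each occupy one 32-byte word. A stdlib-only sketch of the argument encoding follows; the 4-byte function selector is omitted because computing it requires keccak-256, which is not in the Python standard library.

```python
def encode_address_uint256(usr: str, wad: str) -> bytes:
    """ABI-encode an (address, uint256) argument pair, e.g. for
    exit(usr, wad), as two 32-byte words. Selector bytes not included."""
    raw = bytes.fromhex(usr.removeprefix("0x"))
    if len(raw) != 20:
        raise ValueError("usr must be a 20-byte 0x-prefixed address")
    word_usr = raw.rjust(32, b"\x00")        # addresses are left-padded
    word_wad = int(wad).to_bytes(32, "big")  # decimal string -> uint256
    return word_usr + word_wad

# 1 DAI expressed as a wad (18 decimals), passed as the decimal string
# the schema asks for.
args = encode_address_uint256("0x" + "ab" * 20, "1000000000000000000")
assert len(args) == 64
```

Passing the amount as a decimal string sidesteps JSON's loss of precision on integers above 2**53, which is presumably why the schema requires it.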
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses that it is a write operation, mentions potential access control restrictions (e.g., onlyOwner), and states there is no return value. However, it omits what happens upon execution (e.g., token transfers, state changes) and error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with minimal waste; front-loads the function signature and immediately follows with critical safety warnings about access control. The first sentence is slightly redundant with the schema but establishes the call pattern efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a two-parameter tool with no output schema and no annotations, the description covers critical safety information (access control requirements) but fails to explain the domain-specific operation (what 'exit' means) or provide error handling context. Adequate but minimal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (defining Ethereum address and uint256 formats), so the baseline is 3. The description repeats the parameter types in the function signature but adds no additional semantic context (e.g., that 'usr' is the beneficiary address or 'wad' is the token amount to withdraw).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies the operation as a 'Write function' and mentions access control, but primarily restates the function name ('Calls exit') without explaining what 'exit' actually accomplishes in the domain context (e.g., withdrawing tokens/collateral from the contract).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a prerequisite warning to 'Check contract source for modifier requirements before calling,' implying safety concerns, but lacks explicit guidance on when to use this versus siblings like 'join' or 'cage' (likely inverse operations).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
join (grade: B)
Calls join(usr: string, wad: string). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| usr | Yes | address (Ethereum address, 0x-prefixed). | |
| wad | Yes | uint256 (uint256, pass as decimal string). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Correctly discloses 'Write function' mutation type and 'No return value', plus auth restrictions. However, omits state change details, event emissions, or failure modes that would be critical for a blockchain transaction tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences total. First sentence wastes space repeating the function signature already evident in the schema. Remaining sentences efficiently convey auth warnings and return value status. No unnecessary fluff beyond the opening redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 2-parameter write operation with no output schema: covers the critical auth warning. However, given the lack of annotations, it should explain the business logic of 'joining' and potential side effects (e.g., token transfers, state updates) to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (usr as Ethereum address, wad as uint256). The description merely echoes parameter names and types without adding semantic context (e.g., what 'usr' represents, that 'wad' is likely an amount). Baseline 3 appropriate since schema does the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies as a 'Write function' which clarifies mutation, but 'Calls join(usr: string, wad: string)' tautologically restates the schema. Fails to explain what 'join' means in this domain (e.g., depositing collateral, joining a pool) or how it differs from sibling 'exit' or other write operations like 'rely'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides prerequisite guidance ('Check contract source for modifier requirements') and warns about access control (onlyOwner), but lacks explicit when-to-use guidance versus alternatives like 'exit' or preconditions for successful execution.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
live (grade: A)
Calls live(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns uint256.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses: (1) side effects ('does not modify contract state'), (2) access control ('any address can call'), and (3) return type ('Returns uint256'). This adequately covers the safety and permissions profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: function identification, side-effect declaration, permission level, and return type. Each sentence earns its place and critical information (read-only, unrestricted) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple zero-parameter read function without output schema, the description compensates by documenting the return type (uint256) and access characteristics. It is complete enough for invocation, though a brief explanation of what 'live' represents would improve it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which establishes a baseline score of 4. The description correctly does not invent parameter semantics where none exist, maintaining appropriate conciseness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it 'Calls live()' which is somewhat tautological, though it clarifies this is a read operation. However, it fails to explain what 'live' actually represents (e.g., contract status, activation flag) or how it differs semantically from siblings like 'cage' or 'vat' in the MakerDAO context implied by the tool names.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes the function is 'Unrestricted — any address can call' and read-only, which implicitly distinguishes it from permissioned write operations like 'deny' or 'rely'. However, it lacks explicit guidance on when to use this versus other read functions or what business logic triggers its use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
LogNote (grade: C)
Event emitted by the contract. Indexed fields (filterable): sig, usr, arg1, arg2. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| sig | Yes | bytes4 (bytes4, hex string, 0x-prefixed) (indexed). | |
| usr | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| arg1 | Yes | bytes32 (32-byte hex string, 0x-prefixed) (indexed). | |
| arg2 | Yes | bytes32 (32-byte hex string, 0x-prefixed) (indexed). | |
| data | Yes | bytes (hex-encoded bytes, 0x-prefixed). | |
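Because four fields are indexed, LogNote is most likely an anonymous event in the DSNote style used across MakerDAO contracts (an assumption, since the contract source is not shown): topic 0 then carries the padded bytes4 sig rather than an event-signature hash. Indexed values are matched as 32-byte topics, with addresses left-padded and fixed-width bytesN right-padded. A sketch of topic formatting for an eth_getLogs-style filter:

```python
def address_topic(addr: str) -> str:
    """Format an indexed address as a 32-byte log topic (left-padded)."""
    raw = bytes.fromhex(addr.removeprefix("0x"))
    if len(raw) != 20:
        raise ValueError("expected a 20-byte 0x-prefixed address")
    return "0x" + raw.rjust(32, b"\x00").hex()

def bytes4_topic(sig: str) -> str:
    """Format an indexed bytes4 as a 32-byte log topic (right-padded)."""
    raw = bytes.fromhex(sig.removeprefix("0x"))
    if len(raw) != 4:
        raise ValueError("expected a 4-byte 0x-prefixed selector")
    return "0x" + raw.ljust(32, b"\x00").hex()

# Hypothetical filter: match any sig (topic 0), a specific usr (topic 1),
# and any arg1/arg2. The contract address below is a placeholder.
log_filter = {
    "address": "0x0000000000000000000000000000000000000000",  # placeholder
    "topics": [None, address_topic("0x" + "11" * 20), None, None],
}
```

Under this reading, the tool's five "parameters" are filter criteria rather than call arguments, which is exactly the ambiguity the evaluation flags.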
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description fails to disclose execution behavior: it doesn't specify whether subscription is persistent (websocket) or one-time polling, what the return format is, or whether this tool blocks. For an event-handling tool with 5 required parameters, the behavioral gap is significant.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two brief sentences with no redundancy. However, the extreme brevity contributes to the ambiguity around the tool's purpose; it is concise but under-specified for a 5-parameter tool with no output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 5 required parameters, no output schema, and no annotations, the description should explain the subscription mechanism, return behavior, or data format. It leaves critical gaps: it doesn't clarify if 'data' is required for filtering, what happens when invoked, or how to handle the event stream. Incomplete for the complexity implied by the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, establishing a baseline of 3. The description adds minimal semantic value beyond the schema—it lists which fields are indexed/filterable, but the schema already indicates this with '(indexed)' tags in each property description. No additional context about valid ranges or relationships between parameters is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the subject ('Event emitted by the contract') and hints at the mechanism ('Subscribe via log filters'), but fails to clearly state the tool's action—whether it subscribes to events, queries historical logs, or filters real-time logs. The verb is missing, leaving the actual operation ambiguous despite mentioning the event type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus querying other events or using the sibling transaction tools (cage, join, exit, etc.). The phrase 'Subscribe via log filters' is the only usage hint, but lacks context on prerequisites or alternatives like 'getLogs' vs 'subscribe'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rely (grade: B)
Calls rely(usr: string). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| usr | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses key behavioral traits: it explicitly states 'Write function' (mutation), warns of 'access control restrictions,' and notes 'No return value.' This covers safety profile and output expectations adequately, though it omits side effects like emitted events.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact at four sentences, but front-loads with the low-value tautology 'Calls rely(usr: string).' The remaining content (write function, access control warning, no return value) earns its place, though the structure could prioritize the behavioral warning over the signature restatement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a single parameter, no output schema, and no annotations, the description meets minimum viability by warning about access control and mutation. However, it lacks explanation of the function's semantic effect (what state changes occur, what 'rely' means in the protocol) and fails to reference the authorization model implied by sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('address (Ethereum address, 0x-prefixed)'). The description mentions 'usr: string' but does not add semantic meaning beyond the schema (e.g., what the address represents—likely a new authorized user). Baseline score appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as a 'Write function' with access control, but opens with the tautological 'Calls rely(usr: string)' that merely restates the tool name and parameter. It fails to explain what 'rely' actually accomplishes in the system (likely granting authorization/wardship given siblings like 'deny' and 'wards'), leaving the semantic purpose opaque.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description warns to 'Check contract source for modifier requirements before calling,' implying prerequisite checks are needed. However, it lacks explicit guidance on when to use this versus siblings (e.g., 'rely' vs 'deny') or what conditions must be met (e.g., caller must be an existing owner/ward).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vat (grade: B)
Calls vat(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns address.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong disclosure given zero annotations. Explicitly states 'Read-only — does not modify contract state', 'Unrestricted — any address can call', and 'Returns address'. This effectively communicates the safety profile, permission requirements, and return type that would normally appear in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences total, but the opening 'Calls vat()' wastes space by restating the tool name without adding semantic value. The remaining sentences efficiently convey read-only status, permission requirements, and return type. Front-loading is suboptimal due to the tautological opening.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a zero-parameter read operation, covering safety and return type. However, it fails to clarify what the returned address represents (the vat contract address?) or provide context about the vat's role in the system. Without an output schema, the semantic meaning of the returned address remains ambiguous.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters, which per scoring rules establishes a baseline score of 4. No parameter documentation is required or expected.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it calls 'vat()' and returns an address, which provides some functional context. However, 'Calls vat()' is tautological (restating the tool name), and it fails to explain what 'vat' represents (likely the core vault engine address) or distinguish this from sibling address-returning getters like 'dai' or 'wards'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives. Given siblings like 'join', 'exit', and 'cage' exist in what appears to be a MakerDAO-style contract suite, the description should indicate when retrieving the vat address is necessary (e.g., before composing transactions with other vault operations).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wards
Calls wards(param0: string). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns uint256.
| Name | Required | Description | Default |
|---|---|---|---|
| param0 | Yes | address (Ethereum address, 0x-prefixed). | |
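As a minimal sketch of the conventions described above, the snippet below validates the 0x-prefixed address format the schema requires for `param0`, and interprets the raw uint256 return as an authorization flag (in MakerDAO-style contracts, `wards` conventionally yields 0 or 1). The helper names are illustrative assumptions, not part of the actual server API.

```python
import re

# Assumed format from the schema: 0x prefix followed by 40 hex characters
# (a 20-byte Ethereum address). Checksum casing is not verified here.
ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def validate_address(param0: str) -> bool:
    """Check the 0x-prefixed hex address format before calling wards."""
    return bool(ADDRESS_RE.match(param0))

def interpret_wards(raw: int) -> bool:
    """Map the raw uint256 result to an authorization flag.

    Convention assumed from MakerDAO-style auth: 0 = not authorized,
    1 = authorized. Any other value is treated as unexpected.
    """
    if raw not in (0, 1):
        raise ValueError(f"unexpected wards value: {raw}")
    return raw == 1

print(validate_address("0x" + "ab" * 20))  # True
print(interpret_wards(1))                  # True
```

An agent could run the format check before issuing the tool call, avoiding a round trip for an input the contract would reject or misinterpret.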
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses read-only safety and unrestricted access. However, it only notes 'Returns uint256' without explaining what the value represents (typically 0 or 1 indicating authorization status), leaving a critical behavioral gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but front-loaded with a redundant function signature ('Calls wards(param0: string)') that duplicates structured data. The remaining sentences efficiently convey read-only and unrestricted properties, though the return value statement lacks substance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the simple single-parameter input, the absence of an output schema means the description must explain return values, which it fails to do semantically (uint256 meaning is unclear). It also omits domain context about what constitutes a 'ward', leaving users without sufficient information to interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with param0 fully documented as an Ethereum address. The description mentions 'param0: string' but does not add semantic meaning beyond the schema (e.g., whose address to check, or format validation). Baseline 3 is appropriate since the schema carries the load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the operation as read-only and mentions the function name 'wards', but fails to explain what 'wards' means in this domain context (authorization/permission checking in MakerDAO). The opening 'Calls wards(param0: string)' is tautological and wastes space stating what the schema already defines.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It distinguishes this tool from write operations (siblings like 'rely', 'deny', 'cage') by stating 'Read-only' and 'Unrestricted', indicating it can be called by any address without gas costs or state changes. However, it lacks specific guidance on when to check wards (e.g., before authorizing transactions) or how to interpret the results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.