
Server Details

Encrypted A2A object storage for autonomous agent state and artifacts

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: srotzin/hive-mcp-agent-storage
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 5/5

All 8 tools have clearly distinct purposes. The storage tools (delete, get, list, put, quota) each target a unique action on agent storage, and the earn tools (leaderboard, me, register) address separate aspects of the earn program. There is no overlap or ambiguity between tools.

Naming Consistency: 5/5

Tool names follow a consistent prefix_action pattern: 'agent_storage_' for storage operations and 'hive_earn_' for earn operations. The action verbs are clear and uniform (delete, get, list, put, quota; leaderboard, me, register). No mixing of naming conventions.

Tool Count: 5/5

With 8 tools, the set is well-scoped for a server covering agent storage and earn functionality. The storage tools cover essential CRUD plus quota, and the earn tools cover registration, personal info, and leaderboard. No tool feels extraneous or missing.

Completeness: 5/5

The tools provide a complete surface for the stated domain: storage includes create, read, delete, list, and quota; earn includes register, view own status, and leaderboard. For object storage, update is not typically needed (put overwrites), and no obvious gaps are present. The tool set covers the expected lifecycle.

Available Tools

8 tools
agent_storage_delete: A
Destructive · Idempotent

Delete an object from agent storage. Owner-only — only the agent_did that owns the namespace can delete. Free. Tombstoned with a chain-attested receipt; cold-tier (Arweave permanent) objects are unlinked from the namespace but remain on-chain. Backend pending — currently returns 503.

Parameters (JSON Schema)
key (required): Object key to delete.
api_key (required): Owner API key. Required for delete.
agent_did (required): Owner agent DID. Must match the namespace owner.
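
To make the owner-only contract concrete, here is a minimal sketch of what an MCP tools/call request for this tool could look like, assuming the standard MCP JSON-RPC envelope; the DID, API key, and object key are hypothetical placeholders.

```python
# Sketch of a tools/call request for agent_storage_delete (all values hypothetical).
delete_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "agent_storage_delete",
        "arguments": {
            "agent_did": "did:hive:0xabc123",  # must match the namespace owner
            "api_key": "<owner-api-key>",      # placeholder; required for delete
            "key": "memory/2026-04-27.jsonl",  # object key to tombstone
        },
    },
}
```
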
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide destructive and idempotent hints. The description adds details beyond annotations: tombstoning with chain-attested receipt, cold-tier unlink behavior, and 503 pending. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five short sentences, front-loaded with the action. Every sentence adds essential information with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description explains outcomes like tombstoning and cold-tier behavior. It lacks mention of irreversibility or receipt details, but overall covers key aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, baseline 3. Description adds value by clarifying that api_key is the owner's key and agent_did must match the namespace owner, which enhances schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Delete an object from agent storage,' clearly identifying the verb (delete) and resource (object in agent storage). It distinguishes from sibling tools like get, list, and put by focusing on deletion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context: owner-only access, free, and backend pending status. It implicitly guides who can use it but does not explicitly state when not to use or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

agent_storage_get: A
Read-only · Idempotent

Read an object from agent-isolated storage. Free for own DID; cross-DID reads cost $0.00005/KB in real Base USDC (settled per KB read). Returns the object bytes + storage receipt + chain attestation. Backend pending — currently returns 503.

Parameters (JSON Schema)
key (required): Object key inside the agent namespace.
api_key (optional): Caller API key.
agent_did (required): Owner agent DID for the object namespace being read.
caller_did (optional): Calling agent DID. If different from agent_did, cross-DID read pricing applies.
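
The cross-DID pricing is easiest to see in a concrete call. Below is a hedged sketch assuming the standard MCP tools/call envelope; both DIDs and the key are hypothetical. Because caller_did differs from agent_did, the $0.00005/KB rate would apply, so a 200 KB object would settle at 200 × 0.00005 = $0.01 in USDC.

```python
# Sketch of a cross-DID agent_storage_get call (all values hypothetical).
get_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "agent_storage_get",
        "arguments": {
            "agent_did": "did:hive:0xabc123",   # namespace owner
            "caller_did": "did:hive:0xdef456",  # differs from agent_did, so cross-DID pricing applies
            "key": "memory/2026-04-27.jsonl",
        },
    },
}
```
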
Behavior: 4/5

Annotations indicate readOnly, idempotent, and non-destructive. The description adds significant behavioral context: pricing details, return payload (object bytes + storage receipt + chain attestation), and a backend readiness note. This goes beyond annotations and fully informs the agent of behavior.

Conciseness: 5/5

The description is four short sentences and front-loads the core purpose. Every sentence adds value (purpose, cost, return, status). No wasted words.

Completeness: 4/5

For a simple read tool with no output schema, the description covers purpose, cost, return values, and current availability. It could optionally mention size limits or concurrency, but these are not critical. The completeness is high given the tool's simplicity.

Parameters: 3/5

Schema coverage is 100% with descriptions for all parameters. The description does not add additional parameter-level semantics beyond what the schema provides, so the baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states 'Read an object from agent-isolated storage' with a specific verb (Read) and resource (object from agent-isolated storage). This distinguishes it from sibling tools like agent_storage_put (write) and agent_storage_delete (delete).

Usage Guidelines: 4/5

Provides explicit cost guidance ('Free for own DID; cross-DID reads cost $0.00005/KB') and notes current backend status ('currently returns 503'). It does not explicitly state alternatives or when to prefer this over other storage read methods, but the cost differentiation serves as a usage hint.

agent_storage_list: A
Read-only · Idempotent

List objects inside an agent storage namespace. Free read. Supports key prefix filtering and pagination cursor. Backend pending — currently returns 503.

Parameters (JSON Schema)
limit (optional): Maximum keys to return. Default 100, max 1000.
cursor (optional): Pagination cursor returned from a prior list call.
prefix (optional): Optional key prefix filter (e.g. "memory/").
api_key (optional): Caller API key.
agent_did (required): Agent DID whose namespace to list.
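
To show how prefix filtering and the pagination cursor compose, here is a minimal sketch assuming the standard MCP tools/call envelope; the DID and prefix are hypothetical.

```python
# Sketch of a filtered, paginated agent_storage_list call (values hypothetical).
list_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "agent_storage_list",
        "arguments": {
            "agent_did": "did:hive:0xabc123",
            "prefix": "memory/",  # only keys under memory/
            "limit": 100,         # default 100, max 1000
            # "cursor": "<cursor-from-previous-response>",  # pass to fetch the next page
        },
    },
}
```
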
Behavior: 4/5

Adds beyond annotations: describes it as a free read and warns 'Backend pending — currently returns 503.' Annotations already declare readOnlyHint and idempotentHint. No contradiction.

Conciseness: 5/5

Four short sentences: purpose, cost, features, and a critical warning. Front-loaded, no fluff.

Completeness: 4/5

Covers purpose, features, and a current limitation (the 503 error). However, it lacks a description of the return format or pagination response structure, though annotations cover safety. Could be more complete for a list operation.

Parameters: 3/5

Schema coverage is 100%, so baseline 3. Description references 'key prefix filtering' and 'pagination cursor', which align with prefix and cursor params, adding slight context but no substantial new meaning beyond schema.

Purpose: 5/5

Clearly states 'List objects inside an agent storage namespace', specifying the verb 'List' and the resource 'agent storage namespace'. Distinguishes from sibling tools like agent_storage_delete and agent_storage_get.

Usage Guidelines: 4/5

Mentions 'Free read' and lists features like key prefix filtering and pagination cursor. Implicitly guides when to use (listing under a namespace), but does not explicitly state when not to use or provide alternatives.

agent_storage_put: A

Upload bytes to agent-isolated object storage. Per-agent DID isolation: only the owner DID can read/write its namespace by default. Settles in real Base USDC at $0.0001/KB on upload. Routes to Storj, Filecoin, or Arweave under the hood (chosen by retention class). Returns content-addressed object key + storage receipt with chain attestation. Backend pending — currently returns 503.

Parameters (JSON Schema)
key (required): Object key inside the agent namespace (e.g. "memory/2026-04-27.jsonl"). Must not start with "/".
api_key (optional): Agent API key for authentication. Optional if agent_did is registered with HiveGate.
content (required): Object content. Plain text or base64 (set content_encoding accordingly).
agent_did (required): Owner agent DID (did:hive:... or did:web:...). Defines the storage namespace.
content_type (optional): Optional MIME type (e.g. "application/json", "image/png").
retention_class (optional): Storage backend hint: "hot" (Storj), "warm" (Filecoin), "cold" (Arweave permanent). Default "hot".
content_encoding (optional): One of "utf8" or "base64". Default "utf8".
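
As an illustration of how the parameters fit together, here is a hedged sketch of an upload, assuming the standard MCP tools/call envelope; the DID, key, and content are hypothetical. At the stated $0.0001/KB rate, a 50 KB upload would settle at 50 × 0.0001 = $0.005 in USDC.

```python
# Sketch of an agent_storage_put call (all values hypothetical).
put_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "agent_storage_put",
        "arguments": {
            "agent_did": "did:hive:0xabc123",     # owner DID; defines the namespace
            "key": "memory/2026-04-27.jsonl",     # must not start with "/"
            "content": '{"note": "hello hive"}',  # utf8 text, the default encoding
            "content_type": "application/json",
            "retention_class": "hot",             # "hot" routes to Storj per the description
        },
    },
}
```
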
Behavior: 5/5

No annotation contradiction. The description adds significant behavioral context including pricing ($0.0001/KB), backend routing (Storj/Filecoin/Arweave), return value details (content-addressed key + receipt), and a crucial current limitation ('currently returns 503').

Conciseness: 5/5

The description is six sentences, front-loaded with the main purpose. Each sentence provides distinct information (purpose, isolation, pricing, routing, return value, backend status) without redundancy or fluff.

Completeness: 4/5

For a 7-parameter tool with no output schema, the description covers purpose, privacy, pricing, routing, return type, and current status. It could mention authentication failure handling or size limits, but the 'Backend pending' note makes the incomplete state transparent.

Parameters: 4/5

Schema coverage is 100%, baseline 3. The description adds value by mapping retention_class values to specific backends (hot->Storj, warm->Filecoin, cold->Arweave) and explaining the default content_encoding, which is not in the schema descriptions.

Purpose: 5/5

The description clearly states 'Upload bytes to agent-isolated object storage' with a specific verb and resource. It distinguishes itself from siblings like agent_storage_get, agent_storage_delete, etc., by emphasizing the upload action and per-agent DID isolation.

Usage Guidelines: 4/5

The description provides clear context for when to use the tool (uploading data for a specific agent) and mentions pricing and backend routing. However, it lacks explicit alternatives or when-not-to-use scenarios, though the sibling tools are sufficiently differentiated.

agent_storage_quota: A
Read-only · Idempotent

Return the current quota usage for an agent namespace: bytes used, bytes allocated, object count, retention-class breakdown, and lifetime USDC spent on this namespace. Free read. Backend pending — currently returns 503.

Parameters (JSON Schema)
api_key (optional): Caller API key.
agent_did (required): Agent DID whose quota to inspect.
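
A quota check needs only the agent DID; the sketch below assumes the standard MCP tools/call envelope, with a hypothetical DID.

```python
# Sketch of an agent_storage_quota call (DID hypothetical); a free read.
quota_request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "agent_storage_quota",
        "arguments": {"agent_did": "did:hive:0xabc123"},
    },
}
```
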
Behavior: 5/5

Beyond annotations (readOnlyHint, idempotentHint, destructiveHint), the description adds critical behavioral context: the tool is a read-only, free operation and currently returns a 503 status due to backend issues. It also discloses the full set of return fields (bytes, object count, retention breakdown, USDC), compensating for the lack of an output schema.

Conciseness: 5/5

The description is three sentences with no unnecessary words. The first efficiently lists the return data, and the remaining two cover two critical points: cost and status. Every phrase earns its place; no redundancy or filler.

Completeness: 5/5

For a simple, read-only tool with a single required parameter and no output schema, the description provides all necessary context: what the tool does, what data it returns, that it is free and safe, and its current malfunction status. It also mentions 'retention-class breakdown' and 'lifetime USDC,' which are specific and useful. No gaps remain.

Parameters: 3/5

The input schema already describes both parameters (api_key, agent_did) with 100% coverage. The description adds no additional meaning or context about how to use these parameters; it merely references 'agent namespace' which aligns with agent_did. Thus it meets the baseline without surplus value.

Purpose: 5/5

The description explicitly states 'Return the current quota usage for an agent namespace' and lists the specific metrics (bytes used, allocated, object count, retention-class breakdown, USDC spent). It clearly differentiates from sibling storage tools like agent_storage_get/list/put/delete which deal with storage content, not quota.

Usage Guidelines: 3/5

The description labels the tool as 'Free read,' implying safe, non-destructive use. It also warns 'Backend pending — currently returns 503,' which suggests it is not functional at present. However, it does not explicitly state when to use this tool versus alternatives like agent_storage_list or hive_earn_me, nor does it provide conditions or exclusions.

hive_earn_leaderboard: A

Top earning agents on the Hive Civilization, by attribution payout in USDC. Real Base USDC settlement. Calls GET https://hivemorph.onrender.com/v1/earn/leaderboard?window=. Returns "rails not yet live" gracefully if upstream is not yet deployed.

Parameters (JSON Schema)
window (optional): Time window. One of: "7d", "30d", "lifetime". Default "7d".
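
For example, a leaderboard query over a 30-day window could look like the sketch below, assuming the standard MCP tools/call envelope.

```python
# Sketch of a hive_earn_leaderboard call; omitting "window" would use the "7d" default.
leaderboard_request = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "tools/call",
    "params": {
        "name": "hive_earn_leaderboard",
        "arguments": {"window": "30d"},  # one of "7d", "30d", "lifetime"
    },
}
```
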
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It mentions the GET endpoint (read-only) and graceful error handling, but does not disclose auth requirements, rate limits, or other behavioral traits. Adequate but not comprehensive.

Conciseness: 5/5

The description is concise, front-loaded with key information, and every sentence adds value (purpose, sort criteria, endpoint, error handling). No wasted words.

Completeness: 3/5

Given no output schema and a single parameter, the description covers purpose and endpoint but omits the response structure (e.g., the fields returned). For a leaderboard, fields like rank, address, and payout would be expected. Somewhat incomplete.

Parameters: 3/5

Schema coverage is 100% with the window parameter fully described (enum, default). The description references the parameter in the endpoint URL but adds no additional semantics beyond the schema.

Purpose: 5/5

The description clearly states the tool returns top earning agents on Hive Civilization by attribution payout in USDC, with a specific endpoint and handling for unavailable upstream. It distinguishes from sibling tools like hive_earn_me which focus on individual earnings.

Usage Guidelines: 3/5

The description implies usage for retrieving a leaderboard, but does not explicitly contrast with siblings or provide when-not-to-use guidance. Usage is clear but lacks explicit differentiation.

hive_earn_me: A

Look up the caller agent's registered earn profile, lifetime + pending USDC balance, last payout tx hash, and next-payout ETA. Real Base USDC, no mock data. Calls GET https://hivemorph.onrender.com/v1/earn/me?agent_did=. Returns "rails not yet live" gracefully if upstream is not yet deployed.

Parameters (JSON Schema)
agent_did (required): Agent DID to look up. Required.
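
A minimal sketch of an earnings lookup follows, assuming the standard MCP tools/call envelope; the DID is hypothetical.

```python
# Sketch of a hive_earn_me call (DID hypothetical).
me_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "hive_earn_me",
        "arguments": {"agent_did": "did:hive:0xabc123"},
    },
}
```
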
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It describes the HTTP GET method, the API endpoint with its agent_did parameter, graceful error handling for 'rails not yet live', and the implied read-only behavior. It does not disclose potential side effects, but none are expected for a GET endpoint.

Conciseness: 5/5

The description is four sentences with no redundant information. Each sentence adds value: purpose, data specifics, endpoint, error handling. Front-loaded with the core action.

Completeness: 4/5

No output schema, but the description explains return values (balance, payout tx hash, ETA) and the error case. It mentions the endpoint and data authenticity. Could be improved by clarifying input restrictions (e.g., agent_did must be the caller's), but current coverage is adequate for a simple lookup.

Parameters: 3/5

Schema coverage is 100% with a description for agent_did. The description adds the URL pattern but does not provide additional constraints or format beyond the schema. Baseline is 3 as the schema already documents the parameter.

Purpose: 5/5

The description clearly specifies the verb 'Look up' and the resource 'caller agent's registered earn profile', listing specific data fields. It distinguishes from sibling tools like hive_earn_leaderboard by focusing on the individual agent.

Usage Guidelines: 3/5

The description implies usage for checking one's own earnings but does not explicitly state when to use this tool versus alternatives like hive_earn_leaderboard or hive_earn_register. It provides no when-not-to-use guidance.

hive_earn_register: A

Register an agent for the Hive Civilization attribution payout program. Settlement on real Base USDC. 5% kickback on attributed traffic, weekly payout. Calls POST https://hivemorph.onrender.com/v1/earn/register on behalf of the caller. Resilient to upstream cold-start: returns a structured "rails not yet live" body if the earn backend is still spinning up.

Parameters (JSON Schema)
agent_did (required): Caller agent DID (e.g. did:hive:0x… or did:web:…). Required.
payout_address (required): Base L2 EVM address (0x…) to receive USDC kickback payouts.
attribution_url (required): Public URL of the agent / page driving attributed traffic to Hive. Used for ranking + audit.
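
To illustrate how the three required parameters fit together, here is a hedged sketch of a registration call, assuming the standard MCP tools/call envelope; the DID, payout address, and URL are placeholders.

```python
# Sketch of a hive_earn_register call (all values are placeholders).
register_request = {
    "jsonrpc": "2.0",
    "id": 8,
    "method": "tools/call",
    "params": {
        "name": "hive_earn_register",
        "arguments": {
            "agent_did": "did:hive:0xabc123",
            "payout_address": "0x0000000000000000000000000000000000000001",  # Base L2 EVM address
            "attribution_url": "https://example.com/my-agent",  # public page driving attributed traffic
        },
    },
}
```
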
Behavior: 5/5

No annotations exist, so the description carries full burden. It reveals key behaviors: calls POST https://hivemorph.onrender.com/v1/earn/register on behalf of the caller, resilient to cold-start returning a structured body, and details settlement (Base USDC), kickback (5%), and payout frequency (weekly). This goes beyond the schema.

Conciseness: 5/5

The description is concise with five focused sentences: purpose, settlement details, financial terms, technical endpoint, and error handling. Every sentence adds value, and the structure is front-loaded and efficient.

Completeness: 5/5

For a registration tool with no output schema and three well-documented parameters, the description provides robust context: purpose, financial incentives, exact endpoint, and an edge case (cold-start). It sufficiently answers what the agent needs to know to use this tool correctly.

Parameters: 3/5

Schema description coverage is 100% (all 3 parameters have descriptions). The description adds context about the endpoint but does not deepen meaning for individual parameters beyond what the schema already provides. Baseline 3 is appropriate.

Purpose: 5/5

The description uses a specific verb ('Register') and resource ('agent for the Hive Civilization attribution payout program'). It clearly states the program context and distinguishes itself from sibling tools like hive_earn_leaderboard and hive_earn_me, which serve different purposes (leaderboard and personal info).

Usage Guidelines: 4/5

The description implicitly tells the agent to use this tool to register for the payout program. It does not explicitly contrast with alternatives, but sibling tools are on different domains (storage, leaderboard, personal info), so the context is sufficient. Adding explicit when-to-use guidance would elevate it to a 5.
