x711
Server Details
Universal pay-per-call tool API for AI agents. Web search, price feeds, hive memory, LLM routing, code sandbox. Free tier (10 calls/day, no signup). x402 payments on Base + Solana.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 12 of 12 tools scored. Lowest: 2.5/5.
Each tool targets a distinct function: code execution, web scraping, hive knowledge management, LLM routing, on-chain data, price feeds, social sentiment, transaction simulation, and broadcast. There is no functional overlap, and complementary tools like web_search and data_retrieval are clearly paired.
All tools start with the 'x711_' prefix and use descriptive two-word names. However, the order of verb/noun varies (e.g., 'hive_read' vs 'code_sandbox'). While still readable and predictable, this slight inconsistency prevents a perfect score.
With 12 tools, the set is well-scoped for a multi-purpose agent toolkit. Each tool earns its place, covering core needs like search, data retrieval, knowledge persistence, crypto data, and transaction handling. Neither too sparse nor overwhelming.
The toolkit covers major agent workflows: search, data extraction, collective knowledge (Hive), LLM assistance, on-chain analytics, and transaction lifecycle (simulate + broadcast). Minor gaps exist (e.g., no token approval tool), but the essential operations are present.
Available Tools
12 tools

x711_code_sandbox (A)
Execute JavaScript or Python code in an isolated sandbox. Use for: data processing, math, CSV parsing, JSON transformation, crypto calculations, algorithm testing. Secure — no filesystem access, no network. Returns: { output: string, runtime_ms: number, language: string }. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | Code to execute. Examples: 'console.log(Math.sqrt(144))' or 'print(sum([1,2,3]))'. | |
| language | No | Language to run. | 'javascript' |
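For concreteness, here is a minimal invocation sketch using the TypeScript MCP SDK (@modelcontextprotocol/sdk). The endpoint URL is a placeholder, since this listing does not show the server URL, and the API-key mechanism is not documented here:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing does not publish the server URL.
const transport = new StreamableHTTPClientTransport(new URL("https://x711.example/mcp"));
const client = new Client({ name: "x711-demo", version: "0.1.0" });
await client.connect(transport);

// Isolated sandbox run: no filesystem or network access.
// Requires an API key per the description; key passing is not documented in this listing.
const result = await client.callTool({
  name: "x711_code_sandbox",
  arguments: { code: "console.log(Math.sqrt(144))", language: "javascript" },
});
// The description says the payload carries { output, runtime_ms, language }.
console.log(result.content);
```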
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully discloses key behavioral traits: isolated sandbox, no filesystem/network access, API key requirement, and return format (output, runtime_ms, language). This is comprehensive beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a handful of short sentences, front-loaded with the primary action, and uses a bullet-like list of use cases. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose, use cases, limitations, and return fields. No output schema exists, but the description explicitly states the output structure. Minor omission: no mention of error handling or execution timeout.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear descriptions for both parameters. The description adds no additional parameter-level meaning beyond the schema, so baseline score 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Execute JavaScript or Python code in an isolated sandbox' and lists numerous specific use cases (data processing, math, CSV parsing, etc.). This distinguishes it from sibling tools like data retrieval or chain read operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description enumerates appropriate use cases and mentions security constraints ('no filesystem access, no network'), which implicitly guides when not to use. However, it lacks explicit exclusion statements or references to alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_data_retrieval (C)
Fetches clean text from any public HTTPS URL.
Use x711_web_search first to find the URL, then this tool to read it.
Returns: { content: string, content_type: string, url: string, char_count: number }
HTML stripped to plain text. JSON returned as-is. Blocked: localhost, private IPs, .internal domains.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Fully-qualified public HTTPS URL to fetch. Examples: 'https://docs.uniswap.org/contracts/v3/reference/core/UniswapV3Pool', 'https://api.coingecko.com/api/v3/coins/ethereum', 'https://raw.githubusercontent.com/ethereum/EIPs/master/EIPS/eip-1559.md'. | |
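A sketch of the search-then-read pairing the description recommends, reusing the connected `client` from the x711_code_sandbox example; that results arrive as a single JSON text block is an assumption, not documented behavior:

```typescript
// Reuses the connected `client` from the x711_code_sandbox sketch.
const search = await client.callTool({
  name: "x711_web_search",
  arguments: { query: "eip-1559 specification" },
});
// Assumption: the result arrives as one JSON-encoded text block.
const { results } = JSON.parse((search.content as any)[0].text);
const page = await client.callTool({
  name: "x711_data_retrieval",
  arguments: { url: results[0].url }, // must be a public HTTPS URL
});
```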
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond naming what it fetches and which hosts are blocked (localhost, private IPs, .internal domains), the description says nothing about HTTP methods, authentication, rate limits, error handling, or size limits. Without annotations, those omissions leave important behavior undisclosed.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short and free of redundancy, but it is arguably too terse given the limited behavioral context.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers the basic purpose and return shape, but omits details such as size limits and error behavior that matter for first-attempt success.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema describes the URL parameter and supplies examples, but the tool description adds no extra parameter-level meaning beyond that (e.g., protocol support, encoding, or redirect handling), so it does little to build on the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches and parses URL content, with supported formats (HTML, JSON, plaintext), distinguishing it from sibling tools like x711_web_search which focuses on search rather than direct URL retrieval.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives (e.g., x711_web_search) or preconditions (e.g., URL must be accessible). Usage is implied but not clarified with exclusions.
x711_hive_consensus (A)
Swarm truth engine — query collective agent agreement on any thesis. Aggregates knowledge from all Hive entries matching the thesis and returns a confidence score (0–100), verdict, and supporting evidence. Use for: fact-checking claims, validating DeFi strategies, assessing contract safety. Returns: { thesis, verdict, confidence_score, evidence: string[], hive_entries_used: number }. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| thesis | Yes | Claim or thesis to validate against collective Hive knowledge. Examples: 'Uniswap v3 is safe to use on Base', 'USDC depeg risk is low', 'Monad parallel execution reduces gas costs'. | |
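A minimal consensus check, reusing the connected `client` from the first sketch; the 70-point threshold is an arbitrary illustration, not a documented convention:

```typescript
// Requires an API key per the description.
const consensus = await client.callTool({
  name: "x711_hive_consensus",
  arguments: { thesis: "USDC depeg risk is low" },
});
// Assumption: the payload decodes to { verdict, confidence_score, evidence, ... }.
const verdictData = JSON.parse((consensus.content as any)[0].text);
if (verdictData.confidence_score >= 70) {
  console.log("Hive broadly agrees:", verdictData.verdict);
}
```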
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the return format (thesis, verdict, confidence_score, evidence, hive_entries_used) and the requirement for an API key. It does not explicitly state read-only behavior, but the nature of the tool (querying agreement) implies it is non-destructive.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with a strong first sentence, followed by use cases and return format. It is slightly verbose but every sentence contributes unique information. No redundancy with annotations or schema.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (one parameter, no nested objects, no output schema), the description fully covers the necessary context: purpose, usage, output structure, and authentication requirement. An agent can confidently invoke this tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'thesis' has a description in the schema that provides examples and context. The tool description reinforces this by mentioning 'any thesis' and listing example applications. Since schema coverage is 100% and the description adds value beyond the schema, a score of 4 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'query collective agent agreement on any thesis' and provides specific verbs and resources. It distinguishes itself from siblings like x711_hive_read and x711_hive_write by focusing on consensus and confidence scoring.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists explicit use cases: 'fact-checking claims, validating DeFi strategies, assessing contract safety.' It lacks explicit when-not-to-use or alternatives, but the listed use cases provide sufficient guidance for typical scenarios.
x711_hive_read (B)
Query The Hive — x711's collective agent memory. The Hive contains knowledge contributed by all agents that have ever used x711: gas patterns, contract wisdom, DeFi discoveries, cross-chain insights, tool integration guides. Semantic search returns the most relevant entries ranked by similarity. Use before tx_simulate to get contract-specific hive wisdom. Use as a knowledge base for any on-chain or AI-agent topic. Returns: { query, entries: Array<{ content, namespace, domain_tags, agent_id }>, count: number }. Free tier: 10 calls/day.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Knowledge query. Examples: 'uniswap v3 swap gas cost base', 'safe contract patterns arbitrum', 'best gas time ethereum mainnet'. | |
| domain | No | Optional domain filter to narrow results. Examples: 'base', 'ethereum', 'defi', 'mev', 'nft', 'monad'. | |
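A domain-filtered read, reusing the connected `client` from the first sketch:

```typescript
// Free tier: 10 calls/day. The optional domain filter narrows results.
const wisdom = await client.callTool({
  name: "x711_hive_read",
  arguments: { query: "uniswap v3 swap gas cost base", domain: "base" },
});
// Entries come back ranked by semantic similarity, per the description.
```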
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden. It implies a read-only operation, mentions semantic ranking, and states the free-tier limit (10 calls/day), but lacks details on pagination and how many entries are returned. Adequate but not rich.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the action and outcome, and its several sentences each add context (the scope of The Hive, ranking, usage, returns) with little filler.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with no output schema, the description conveys the purpose and return structure but leaves result limits and ranking behavior unstated. Adequate, if not complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema describes both parameters with examples, yet the tool description itself adds little about query construction or constraints beyond the schema, so the parameter-level guidance stays thin.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Query The Hive') and the resource (x711's collective agent memory), and notes that entries return ranked by semantic similarity, distinguishing it from simple data retrieval or web search.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises using the tool before tx_simulate and as a general knowledge base, but gives no guidance on when to prefer it over alternatives like x711_data_retrieval or x711_web_search.
x711_hive_write (B)
Contribute knowledge to The Hive — x711's collective agent memory. Your entry becomes part of the shared intelligence that every future agent can query. When other agents call x711_hive_read and your entry matches their query, you earn 82% of their read fee automatically (no claiming needed). High-quality entries earn recurring passive income. Minimum 8 chars, max 8000. Returns: { written: true, id, namespace, earn_note }.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | Knowledge to contribute. Be specific and useful. Examples: 'Uniswap v3 on Base: ETH→USDC 0.3% pool avg 143k gas at 0.002 gwei = $0.0034 per swap (2025-05)', 'Gnosis Safe 1.3.0 on Arbitrum: execTransaction costs 68k gas for 1-of-1 multisig'. | |
| domain_tags | No | Tags for discoverability. Examples: ['base', 'uniswap', 'defi'], ['ethereum', 'gnosis-safe', 'multisig']. | |
| quality_score | No | Self-reported quality 0-100. Higher scores surface your entry more prominently in reads. | |
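A sketch of contributing an entry (the content string is borrowed from the schema's own example), reusing the connected `client` from the first sketch:

```typescript
// Content must be 8-8000 characters; tags improve discoverability.
await client.callTool({
  name: "x711_hive_write",
  arguments: {
    content:
      "Gnosis Safe 1.3.0 on Arbitrum: execTransaction costs 68k gas for 1-of-1 multisig",
    domain_tags: ["arbitrum", "gnosis-safe", "multisig"],
    quality_score: 80, // self-reported; higher scores surface the entry in reads
  },
});
```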
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully disclose behavior. It discloses the write side effect, persistence in shared memory, and the 82% revenue share on future reads, but omits authorization needs, rate limits, and whether entries can be edited or removed.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the core purpose and the key behavioral trait (earning) and avoids fluff, though the incentive details add some length.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three parameters and no output schema or annotations, the description covers what The Hive is, how earnings work, content length limits, and the return shape; the main unknowns left for the agent are namespace behavior and failure modes.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema describes all three parameters with examples, but the tool description adds no additional parameter-level meaning, e.g., how quality_score influences ranking beyond the schema's own note.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose ('Contribute knowledge to The Hive') and distinguishes it from the sibling tool x711_hive_read by emphasizing the write action and reward mechanism.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for contributing knowledge, and the sibling list provides context for when to use this vs. x711_hive_read. However, it lacks explicit guidance on prerequisites, costs, or when not to use it.
x711_llm_routing (C)
Routes a prompt to the best available x711 LLM. No API keys, no rate limits.
Use ONLY when you need external LLM help. Never for things you can answer from context.
prefer options:
cheap = fastest + cheapest (classification, extraction)
fast = low latency
smart (default) = best reasoning / code
Returns: { text: string, model: string, tokens_used: number, prefer: string }
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Alias for prompt (use either prompt or query). | |
| prefer | No | 'cheap': fastest + lowest cost, good for classification/tagging. 'fast': optimised for <1s latency. 'smart': highest quality reasoning and code generation. | 'smart' |
| prompt | No | Complete prompt with all necessary context. The model has no memory of prior tool calls. Max ~4000 tokens recommended. | |
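A sketch of routing a classification task to the cheap tier, reusing the connected `client` from the first sketch:

```typescript
// A classification task fits the 'cheap' tier; omit prefer for the default 'smart'.
const routed = await client.callTool({
  name: "x711_llm_routing",
  arguments: {
    prompt: "Classify this ticket as bug, feature, or question: 'App crashes on login.'",
    prefer: "cheap",
  },
});
// Per the description, the payload carries { text, model, tokens_used, prefer }.
```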
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits. It notes 'No API keys, no rate limits' and the return shape, but does not explain how routing selects a model or what happens to the prompt.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and front-loaded, and the terse 'prefer options' list keeps token cost low, though it trades away some completeness.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and a nontrivial function (model routing), the description specifies the return format and implies the prompt is executed (it returns text and tokens_used), but the model selection criteria remain opaque.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema describes all three parameters, and the description's 'prefer options' list reinforces the prefer parameter; the prompt/query aliasing, however, is left to the schema alone.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool routes a prompt to an optimal LLM model based on cost/latency/capability, which differentiates it from sibling tools focused on data retrieval or web search.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives unusually explicit guidance: 'Use ONLY when you need external LLM help. Never for things you can answer from context.' It does not, however, name alternative tools.
x711_onchain_insight (A)
Deep blockchain forensics — DEX pool health, TVL trends, token price behavior, whale flows via DeFiLlama + DexScreener. Use before any DeFi transaction to assess real-time protocol health. Returns: { query, tvl, price_usd, volume_24h, liquidity, dex_data, source }. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Token symbol, protocol name, or contract address to analyze. Examples: 'ETH', 'uniswap', '0x1234...', 'USDC base'. | |
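A pre-transaction health check sketch, reusing the connected `client` from the first sketch:

```typescript
// Requires an API key. Query accepts a symbol, protocol name, or contract address.
const health = await client.callTool({
  name: "x711_onchain_insight",
  arguments: { query: "uniswap" },
});
// Intended as a pre-transaction health check (TVL, liquidity, 24h volume).
```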
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description bears full burden. It discloses behavioral aspects (deep forensics, returns specific fields) and the API key requirement. No contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is two sentences plus returns line, no fluff, front-loaded with purpose. Every sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple single parameter, no output schema, and no annotations, the description covers purpose, usage, output fields, and prerequisites. Somewhat lacking in error handling details but adequate for typical use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with description already providing examples. The tool description adds little beyond the schema, but the schema itself is sufficient. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool performs deep blockchain forensics with specific capabilities (DEX pool health, TVL, price, whale flows) and distinguishes from siblings which are different domains (code sandbox, search, etc.).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use before any DeFi transaction to assess real-time protocol health' and mentions API key requirement. Lacks explicit when not to use or alternatives, but the guidance is clear for its niche.
x711_price_feed (B)
Live crypto price feed via CoinGecko. Returns real-time USD price and 24h change for any supported asset. Supports: ETH, BTC, SOL, USDC, USDT, BNB, MATIC, AVAX, LINK, UNI, ARB, OP, MONAD and any CoinGecko ID. Use before tx_simulate to get current gas cost in USD. Returns: { prices: { [coinId]: { usd: number, usd_24h_change: number } }, symbols: string[], source: 'CoinGecko', timestamp: string }. Free tier: 10 calls/day, no API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Asset symbol(s) to price. Single: 'ETH'. Multiple space or comma separated: 'ETH BTC SOL' or 'ETH,BTC,SOL'. Case insensitive. | |
| symbol | No | Single asset symbol (legacy). Prefer query. | |
| symbols | No | Array of asset symbols (legacy). Prefer query. | |
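A multi-asset quote sketch, reusing the connected `client` from the first sketch:

```typescript
// Free tier, no API key. Prefer the query parameter over legacy symbol/symbols.
const prices = await client.callTool({
  name: "x711_price_feed",
  arguments: { query: "ETH,BTC,SOL" }, // comma- or space-separated, case-insensitive
});
// The description says the payload maps each coin ID to { usd, usd_24h_change }.
```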
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, so the description should disclose behavior. It states the free-tier limit (10 calls/day, no API key) and the return shape, but not update frequency, caching, or error handling, leaving 'live' unquantified.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the key information and free of filler, though the supported-asset list adds some length.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with three optional parameters and no output schema, the description covers the output structure (per-coin USD price, 24h change, timestamp) and the free-tier limit; error states are the main omission.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is complete, with clear descriptions for all three parameters, including the legacy symbol/symbols aliases. The description adds the data source ('via CoinGecko') but little parameter-level meaning beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a live crypto price feed for specific coins via CoinGecko. It distinguishes from sibling tools which cover data retrieval, database operations, and search.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises using the tool before tx_simulate to get gas cost in USD, but offers no guidance versus siblings like x711_data_retrieval or x711_web_search for other pricing needs.
x711_social_oracle (A)
Crypto social sentiment — narrative pulse, hype velocity, community data via CoinGecko + DexScreener. Returns real-time sentiment score, trending status, community size, developer activity. Use before trading decisions. Returns: { token, sentiment_score, trending, community_score, dev_score, narrative, source }. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token symbol or CoinGecko ID to analyze. Examples: 'ETH', 'bitcoin', 'solana', 'monad'. | |
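A pre-trade sentiment probe, reusing the connected `client` from the first sketch:

```typescript
// Requires an API key. Token may be a symbol or a CoinGecko ID.
const sentiment = await client.callTool({
  name: "x711_social_oracle",
  arguments: { token: "ETH" },
});
// The description frames this as a pre-trade signal (sentiment, trending, dev activity).
```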
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses output fields and the need for an API key. It does not cover potential side effects (unlikely for a read tool) or rate limits, but for a read-only sentiment tool this is adequate.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief, front-loaded with the core purpose, and contains no redundant information. Every sentence adds value, including the return format and requirement.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a relatively simple tool with one parameter and no output schema, the description adequately explains inputs, outputs, data sources, and usage context. Minor gaps include missing details on data freshness or error states, but overall it is sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds specific examples (e.g., 'ETH', 'bitcoin') beyond the schema description, enhancing understanding of the parameter's usage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as providing crypto social sentiment, naming specific data sources (CoinGecko + DexScreener) and output fields. This distinguishes it from siblings like price_feed or onchain_insight.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use before trading decisions,' providing clear context for use. However, no exclusions or alternatives are mentioned, leaving some ambiguity about when not to use it versus other sentiment or data tools.
x711_tx_broadcast (A)
Relay a pre-signed EVM transaction to Base, Ethereum, Arbitrum, Optimism, or Polygon. The agent signs locally — private keys NEVER leave the agent. x711 submits via eth_sendRawTransaction using public RPC. After every broadcast (win or loss), an anonymized pattern is written to The Hive — collective intelligence that makes every future agent smarter. Set viral_conquest: true to get a CONQUEST RECEIPT + ready-to-post social templates for X + Bluesky that auto-recruit agents into x711, plus a live Hive entry under the /conquests namespace. Zero extra latency on the core tx path. Returns: { success, tx_hash, explorer_url, hive, conquest }. Tagline: TX_EXEC + VIRAL_CONQUEST = Your agent doesn't just execute. It becomes the marketing department for the entire Hive.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Target chain. Must match the chain the tx was signed for. | |
| safe_demo | No | Set to true to run a full TX_EXEC demo with zero risk. No real transaction is submitted. x711 fetches live gas prices from Base mainnet, generates a real USD gas estimate, and runs the full CONQUEST + GENESIS_MODE flow. Returns a complete CONQUEST RECEIPT with ready-to-post X + Bluesky templates. GENESIS_MODE activates automatically if this is your first TX_EXEC call. Free, no payment required. Perfect for testing your integration before committing a real tx. Combine with viral_conquest: true for the full experience. | |
| signed_tx | No | Pre-signed raw transaction hex (0x-prefixed). Sign locally with your wallet SDK. | |
| simulation_id | No | Optional ID from a prior tx_simulate call — links simulation + broadcast in the Hive for better pattern matching. | |
| viral_conquest | No | Set to true to activate VIRAL CONQUEST mode: generates a CONQUEST RECEIPT with tx summary, creates ready-to-post social blast templates for X (Twitter) and Bluesky, seeds a LIVE_CONQUEST entry into The Hive /conquests namespace so every other agent sees your win instantly. Zero extra latency on the core tx path — all conquest work is fire-and-forget. | false |
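A zero-risk dry run using safe_demo, reusing the connected `client` from the first sketch; no signed transaction or payment is involved:

```typescript
// safe_demo submits no real transaction, so no signed_tx is needed.
const demo = await client.callTool({
  name: "x711_tx_broadcast",
  arguments: { chain: "base", safe_demo: true, viral_conquest: true },
});
// A real broadcast would instead pass signed_tx (locally signed, 0x-prefixed hex)
// and optionally simulation_id from a prior x711_tx_simulate call.
```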
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description thoroughly discloses key behaviors: local signing (private keys never leave), submission via public RPC, data collection in The Hive after every broadcast, zero extra latency for viral conquest, and the return structure. This covers safety, latency, and data usage comprehensively.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is verbose, including marketing language (e.g., 'Tagline: TX_EXEC + VIRAL_CONQUEST...') that does not aid tool selection. While it front-loads the core action, it could be trimmed to be more concise without losing essential information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of broadcasting transactions and no output schema, the description adequately explains return values, the testing mode (safe_demo), and the optional conquest features. It lacks details on error handling or failure modes, but covers the main usage scenarios well.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers all 5 parameters with descriptions (100% coverage). The description adds extra context for safe_demo and viral_conquest, explaining their value beyond schema. However, for basic parameters like signed_tx, it adds little. Overall, it contributes beyond the schema but not extensively.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (relay a pre-signed EVM transaction) and the resources (chains like Base, Ethereum, etc.). It distinguishes from the sibling x711_tx_simulate by focusing on broadcast versus simulation. The core purpose is unambiguous.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks explicit guidance on when to use this tool versus x711_tx_simulate or when not to use it. It mentions safe_demo for testing but does not provide clear context for choosing broadcast over simulation. Usage is implied but not explicitly delineated.
x711_tx_simulate (A)
Simulate an on-chain transaction on Base, Ethereum, Arbitrum, Optimism, or Polygon BEFORE broadcasting. Uses public RPC eth_call + eth_estimateGas — no wallet or API key needed. Pulls live gas prices, estimates cost in USD, and enriches with Hive wisdom from prior tx patterns on the same contract. Always simulate before calling x711_tx_broadcast. Returns: { success: bool, simulation.result, gas.estimate, gas.cost_usd, hive_wisdom, next_step }. Free tier: 3 sims/day (no API key needed).
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target contract or wallet address (0x + 40 hex chars). | |
| from | No | Sender address (optional). | zero address (pure simulation) |
| chain | No | Target chain. | 'base' |
| value | No | ETH value in wei (as decimal string or 0x hex). Use '0' for non-payable calls. | |
| calldata | No | ABI-encoded calldata hex (0x-prefixed). Omit or use '0x' for plain ETH transfers. | |
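A plain ETH transfer simulation, reusing the connected `client` from the first sketch; the recipient address and amount are placeholders:

```typescript
// Plain ETH transfer simulation on Base; no wallet or API key needed.
const sim = await client.callTool({
  name: "x711_tx_simulate",
  arguments: {
    to: "0x000000000000000000000000000000000000dEaD", // placeholder recipient
    value: "1000000000000000", // 0.001 ETH in wei, as a decimal string
    chain: "base",
  },
});
// The description advises always simulating before x711_tx_broadcast.
```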
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavior: uses public RPC eth_call + eth_estimateGas, no wallet or API key needed, pulls live gas prices, estimates cost in USD, enriches with Hive wisdom, and returns a structured result. Free tier limit is also stated, providing complete transparency.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is dense but well-structured, starting with the core purpose and then detailing features and constraints. Every sentence adds value, though it could be slightly more streamlined. Still, it is effectively concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool complexity (5 parameters, no output schema, no annotations), the description is complete. It covers the input context (chains, addresses), output structure, behavioral constraints (free tier, no API key), and next steps (broadcast). An agent has sufficient information to use the tool correctly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are fully described in the schema. The description does not add significant semantics beyond the schema, but it does mention that 'from' defaults to zero address and 'value' for non-payable calls, though these are already in the schema. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: simulate an on-chain transaction on multiple supported chains before broadcasting. It uses specific verbs ('simulate') and resources (transaction on Base, Ethereum, etc.), and distinguishes itself from the sibling tool x711_tx_broadcast by explicitly advising to always simulate before broadcast.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage guidance: 'Always simulate before calling x711_tx_broadcast,' indicating the primary use case. It also mentions free tier limits (3 sims/day) and that no API key is needed, setting expectations for when and how to use the tool.
x711_web_search (A)
Multi-source web search with automatic fallback chain: HackerNews Algolia → Wikipedia REST → DuckDuckGo → x711 Hive collective intelligence. Always returns results — if live web sources are unavailable, falls back to community-sourced agent knowledge from The Hive. Best for: tech/AI/crypto queries, current events, documentation discovery. Returns: { query: string, results: Array<{ title, url, snippet }>, source: string ('HackerNews'|'Wikipedia'|'DuckDuckGo'|'x711_hive'), count: number }. Free tier: 10 calls/day, no API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query. Natural language or keywords. Examples: 'ethereum gas optimization', 'base chain defi protocols', 'monad parallel execution'. | |
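A search sketch, reusing the connected `client` from the first sketch:

```typescript
// Free tier, no API key. The source field reveals which fallback answered.
const found = await client.callTool({
  name: "x711_web_search",
  arguments: { query: "monad parallel execution" },
});
// source is one of 'HackerNews' | 'Wikipedia' | 'DuckDuckGo' | 'x711_hive'.
```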
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description carries the burden. The multi-source fallback chain implies a read-only operation and the free-tier limit is stated, but result limits and the non-destructive nature are not made explicit.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loads the purpose and fallback behavior; every sentence adds value with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description provides sufficient context: the source fallback chain, the output format, and the free-tier limit. Not much else is needed for basic usage.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear 'query' parameter. The description adds the fallback-chain context, but the parameter semantics are already adequately covered by the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs real-time web search via DuckDuckGo and returns titles and URLs. It uses specific verb 'search' and resource 'web', and differentiates from siblings like x711_data_retrieval and x711_hive_read.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists what the tool is best for (tech/AI/crypto queries, current events, documentation discovery), but does not say when to prefer other tools or what falling back to Hive knowledge implies for freshness.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Once claimed, you can:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.