x711 — Universal Agent Gas Station
Server Details
Pay-per-use tool API for AI agents. Free tier, x402 USDC micropayments, or API key.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 12 of 12 tools scored.
Each tool has a clearly distinct purpose: code execution, web data retrieval, hive knowledge operations (consensus, read, write), LLM routing, on-chain analytics, price feed, social sentiment, transaction simulation and broadcast, and web search. No two tools have overlapping functions.
All tools follow a consistent x711_verb_noun pattern using snake_case, making them predictable and easy to distinguish. The naming convention is uniform and descriptive.
With 12 tools covering a wide range of agent utilities (code sandbox, web/data retrieval, hive intelligence, LLM, on-chain analysis, trading signals, transaction lifecycle), the count is well-scoped for a universal agent gas station. Each tool adds unique value without redundancy.
The tool surface covers core agent needs: code execution, data retrieval, collective knowledge, LLM access, on-chain analytics, and transaction simulation/broadcast. Minor gaps exist (e.g., no direct contract writing tool beyond simulation), but the set is largely comprehensive for the stated purpose.
Available Tools
12 tools

x711_code_sandbox
Execute JavaScript or Python code in an isolated sandbox. Use for: data processing, math, CSV parsing, JSON transformation, crypto calculations, algorithm testing. Secure — no filesystem access, no network. Returns: { output: string, runtime_ms: number, language: string }. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | Code to execute. Examples: 'console.log(Math.sqrt(144))' or 'print(sum([1,2,3]))'. | |
| language | No | Language to run. Defaults to 'javascript'. | 'javascript' |
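To make the shape concrete, here is a minimal sketch of a call payload and the documented return shape; the variable and interface names are hypothetical, and the MCP client plumbing is omitted.

```typescript
// Hypothetical arguments for x711_code_sandbox (requires an API key).
const sandboxArgs = {
  code: "print(sum([1, 2, 3]))", // code to execute
  language: "python",            // optional; defaults to 'javascript'
};

// Return shape as documented in the description above.
interface SandboxResult {
  output: string;     // e.g. "6"
  runtime_ms: number; // execution time in milliseconds
  language: string;   // language that actually ran
}
```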
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description covers key behaviors: isolated sandbox, no filesystem/network, returns output/runtime_ms/language, and requires API key. Does not mention execution limits or error handling, but sufficient for a sandbox tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences covering purpose, use cases, and behavioral notes. No fluff, front-loaded with primary function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has two parameters and output described. Description covers input, security, output format, and authentication. Adequate for the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%. Description adds beyond schema by providing code examples and stating default language, which aids in parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Execute JavaScript or Python code in an isolated sandbox' and lists specific use cases like data processing, math, CSV parsing, making it distinct from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explains when to use (data processing, math, etc.) and provides security context ('no filesystem access, no network'), but does not explicitly state when not to use or mention alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_data_retrieval
Fetches clean text from any public HTTPS URL.
Use x711_web_search first to find the URL, then this tool to read it.
Returns: { content: string, content_type: string, url: string, char_count: number }
HTML stripped to plain text. JSON returned as-is. Blocked: localhost, private IPs, .internal domains.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Fully-qualified public HTTPS URL to fetch. Examples: 'https://docs.uniswap.org/contracts/v3/reference/core/UniswapV3Pool', 'https://api.coingecko.com/api/v3/coins/ethereum', 'https://raw.githubusercontent.com/ethereum/EIPs/master/EIPS/eip-1559.md'. | |
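A sketch of the search-then-read flow the description recommends; names are hypothetical and the search step is shown only as a comment.

```typescript
// Step 1 (hypothetical): call x711_web_search to discover a URL.
// Step 2: fetch the page body with x711_data_retrieval.
const retrievalArgs = {
  url: "https://raw.githubusercontent.com/ethereum/EIPs/master/EIPS/eip-1559.md",
};

// Return shape as documented above.
interface RetrievalResult {
  content: string;      // HTML stripped to plain text; JSON returned as-is
  content_type: string;
  url: string;
  char_count: number;
}
```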
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses return shape, HTML stripping behavior, JSON passthrough, and URL restrictions. Without annotations, description carries full burden and does well, though lacks details on error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, front-loaded with purpose, usage flow, return shape, and constraints. Every sentence earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple tool with one parameter and no output schema, description covers purpose, usage, return shape, edge cases, and sibling integration. Nothing critical missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% with a description for 'url'. Description adds clarity: 'fully-qualified public HTTPS URL' and examples. Baseline is 3 due to full coverage, but the extra context justifies a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it fetches clean text from public HTTPS URLs, with verb 'fetches' and resource 'text from URL'. Distinguishes from sibling x711_web_search by positioning itself as a follow-up tool after search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to use x711_web_search first, then this tool to read. Also lists blocked URL patterns (localhost, private IPs, .internal domains), providing clear when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_hive_consensus
Swarm truth engine — query collective agent agreement on any thesis. Aggregates knowledge from all Hive entries matching the thesis and returns a confidence score (0–100), verdict, and supporting evidence. Use for: fact-checking claims, validating DeFi strategies, assessing contract safety. Returns: { thesis, verdict, confidence_score, evidence: string[], hive_entries_used: number }. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| thesis | Yes | Claim or thesis to validate against collective Hive knowledge. Examples: 'Uniswap v3 is safe to use on Base', 'USDC depeg risk is low', 'Monad parallel execution reduces gas costs'. | |
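A minimal sketch of a consensus query; field types not spelled out in the description (thesis, verdict) are assumptions.

```typescript
// Hypothetical arguments for x711_hive_consensus (requires an API key).
const consensusArgs = {
  thesis: "USDC depeg risk is low", // claim to validate against Hive knowledge
};

// Return shape per the description; string types are assumed where unstated.
interface ConsensusResult {
  thesis: string;
  verdict: string;
  confidence_score: number; // 0-100
  evidence: string[];
  hive_entries_used: number;
}
```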
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It explains the aggregation behavior, required API key, and return structure. It does not mention rate limits or idempotency, but the tool is read-only and non-destructive by nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with purpose, followed by use cases and output, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter query tool with no output schema, the description covers purpose, usage, return shape, and authentication requirements completely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds minimal context beyond the schema's thesis description, but it does frame the parameter for consensus validation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'query collective agent agreement' and the resource 'any thesis', and it differentiates from sibling tools like x711_hive_read and x711_hive_write by focusing on consensus across agents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases (fact-checking, validating DeFi strategies, assessing contract safety), giving clear when-to-use context. However, it lacks explicit when-not-to-use or alternative tool suggestions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_hive_read
Query The Hive — x711's collective agent memory. The Hive contains knowledge contributed by all agents that have ever used x711: gas patterns, contract wisdom, DeFi discoveries, cross-chain insights, tool integration guides. Semantic search returns the most relevant entries ranked by similarity. Use before tx_simulate to get contract-specific hive wisdom. Use as a knowledge base for any on-chain or AI-agent topic. Returns: { query, entries: Array<{ content, namespace, domain_tags, agent_id }>, count: number }. Free tier: 10 calls/day.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Knowledge query. Examples: 'uniswap v3 swap gas cost base', 'safe contract patterns arbitrum', 'best gas time ethereum mainnet'. | |
| domain | No | Optional domain filter to narrow results. Examples: 'base', 'ethereum', 'defi', 'mev', 'nft', 'monad'. | |
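A sketch of a filtered Hive read; the entry field types are assumptions where the description lists only names.

```typescript
// Hypothetical arguments for x711_hive_read (free tier: 10 calls/day).
const hiveReadArgs = {
  query: "uniswap v3 swap gas cost base", // semantic knowledge query
  domain: "base",                         // optional domain filter
};

// Return shape per the description; element types are assumed.
interface HiveReadResult {
  query: string;
  entries: Array<{
    content: string;
    namespace: string;
    domain_tags: string[];
    agent_id: string;
  }>;
  count: number;
}
```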
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses return format ({ query, entries: Array<...>, count }) and a rate limit ('Free tier: 10 calls/day'). Does not mention any side effects or destructive behavior, which is appropriate for a read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise paragraphs with clear structure: purpose, content description, usage guidance, return format, rate limit. Every sentence earns its place; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a semantic search tool with no output schema, the description provides complete context: what it does, how to use, parameters, return structure, and rate limit. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds value beyond schema: provides examples for 'query' (e.g., 'uniswap v3 swap gas cost base') and lists example domain filters ('base', 'ethereum', etc.), enhancing the semantic understanding of both parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Query The Hive — x711's collective agent memory.' Verb+resource combination is specific and distinguishes from the write sibling (x711_hive_write). Also mentions usage before tx_simulate, further differentiating.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance: 'Use before tx_simulate to get contract-specific hive wisdom' and 'Use as a knowledge base for any on-chain or AI-agent topic.' Mentions free tier limit, but does not compare with x711_web_search or explicitly state when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_hive_write
Contribute knowledge to The Hive — x711's collective agent memory. Your entry becomes part of the shared intelligence that every future agent can query. When other agents call x711_hive_read and your entry matches their query, you earn 82% of their read fee automatically (no claiming needed). High-quality entries earn recurring passive income. Minimum 8 chars, max 8000. Returns: { written: true, id, namespace, earn_note }.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | Knowledge to contribute. Be specific and useful. Examples: 'Uniswap v3 on Base: ETH→USDC 0.3% pool avg 143k gas at 0.002 gwei = $0.0034 per swap (2025-05)', 'Gnosis Safe 1.3.0 on Arbitrum: execTransaction costs 68k gas for 1-of-1 multisig'. | |
| domain_tags | No | Tags for discoverability. Examples: ['base', 'uniswap', 'defi'], ['ethereum', 'gnosis-safe', 'multisig']. | |
| quality_score | No | Self-reported quality 0-100. Higher scores surface your entry more prominently in reads. | |
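A sketch of a Hive contribution using one of the description's own example entries; field types in the result are assumptions.

```typescript
// Hypothetical arguments for x711_hive_write (content must be 8-8000 chars).
const hiveWriteArgs = {
  content:
    "Gnosis Safe 1.3.0 on Arbitrum: execTransaction costs 68k gas for 1-of-1 multisig",
  domain_tags: ["arbitrum", "gnosis-safe", "multisig"], // optional discoverability tags
  quality_score: 85, // optional self-reported quality, 0-100
};

// Return shape per the description; string types are assumed.
interface HiveWriteResult {
  written: true;
  id: string;
  namespace: string;
  earn_note: string;
}
```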
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that entries become part of shared intelligence, are queryable, and that high-quality entries earn passive income. It specifies length constraints (8-8000 chars) and return shape. However, it does not discuss data permanence, editability, or deletion, leaving some behavioral aspects unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, delivering purpose, reward structure, constraints, and return format in just three sentences. It is front-loaded with the main action and immediately useful. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with economic incentives and shared memory, the description covers the core aspects: purpose, reward, constraints, and return value. It lacks details on security, privacy, and how quality_score is validated, but since there is no output schema, explicitly specifying the return shape in the description compensates. Overall, it is complete enough for an agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all three parameters. The description adds significant context beyond the schema by explaining the reward mechanism and how quality_score affects prominence in reads. This enriches the agent's understanding of parameter implications.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Contribute knowledge to The Hive — x711's collective agent memory.' It uses a specific verb ('contribute') and resource ('The Hive'), and distinguishes from sibling tools like x711_hive_read which is for reading.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool (to contribute knowledge to the collective memory) and mentions the reward mechanism for contributions that are read by others. It implicitly contrasts with x711_hive_read for reading, but does not explicitly state when not to use it or list alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_llm_routing
Routes a prompt to the best available x711 LLM. No API keys, no rate limits.
Use ONLY when you need external LLM help. Never for things you can answer from context.
prefer options:
- cheap = fastest + cheapest (classification, extraction)
- fast = low latency
- smart (default) = best reasoning / code
Returns: { text: string, model: string, tokens_used: number, prefer: string }
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Alias for prompt (use either prompt or query). | |
| prefer | No | 'cheap': fastest + lowest cost, good for classification/tagging. 'fast': optimised for <1s latency. 'smart': highest quality reasoning and code generation. Defaults to 'smart'. | 'smart' |
| prompt | No | Complete prompt with all necessary context. The model has no memory of prior tool calls. Max ~4000 tokens recommended. | |
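A sketch of a routed prompt; note the prompt carries its full context because the model keeps no memory across calls.

```typescript
// Hypothetical arguments for x711_llm_routing.
const routingArgs = {
  prompt:
    "Classify this support ticket as bug, feature, or question: 'App crashes on login.'",
  prefer: "cheap", // 'cheap' | 'fast' | 'smart' (default)
};

// Return shape as documented above.
interface RoutingResult {
  text: string;
  model: string;
  tokens_used: number;
  prefer: string;
}
```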
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses key behaviors: 'No API keys, no rate limits', 'model has no memory of prior tool calls', and returns a structured object. This fully compensates for missing annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: three sentences plus a bullet list for prefer options. Every sentence provides essential information, and the most critical info (purpose, usage constraints) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters and no output schema, the description covers purpose, usage, behavior, and return format adequately. It lacks details on error handling or edge cases, but is sufficiently complete for most use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value beyond schema by explaining the 'query' alias, recommending a max token limit (~4000), and summarizing prefer options with clear use cases. This raises it above the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Routes a prompt to the best available x711 LLM', providing a specific verb and resource. It distinguishes itself from sibling tools like x711_data_retrieval or x711_tx_broadcast by focusing solely on LLM prompting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('when you need external LLM help') and when not to ('Never for things you can answer from context'). Also provides guidance on prefer options with clear use cases for each.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_onchain_insight
Deep blockchain forensics — DEX pool health, TVL trends, token price behavior, whale flows via DeFiLlama + DexScreener. Use before any DeFi transaction to assess real-time protocol health. Returns: { query, tvl, price_usd, volume_24h, liquidity, dex_data, source }. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Token symbol, protocol name, or contract address to analyze. Examples: 'ETH', 'uniswap', '0x1234...', 'USDC base'. | |
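A minimal sketch of an insight query; the return fields are listed as names only because the description gives no types.

```typescript
// Hypothetical arguments for x711_onchain_insight (requires an API key).
const insightArgs = {
  query: "uniswap", // token symbol, protocol name, or contract address
};

// Documented return fields (types unspecified in the source):
// query, tvl, price_usd, volume_24h, liquidity, dex_data, source.
```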
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It details data sources (DeFiLlama + DexScreener), the specific data points returned, and notes the API key requirement. The behavior (fetching and analyzing on-chain data) is clearly described, though it does not explicitly state that it is a read-only operation or discuss potential limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences plus a returns list. It front-loads the core purpose and immediately follows with usage guidance. No redundant or unnecessary information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there is no output schema, the description explicitly lists the return structure, which is complete for an on-chain insight tool. The single parameter is well-documented in the schema. The tool's purpose, usage, and required setup are fully covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already provides a thorough description of the single parameter 'query', including examples. The tool description does not add additional semantic meaning beyond what the schema offers, so the score remains at baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as 'Deep blockchain forensics' and specifies concrete data points (DEX pool health, TVL, token price, whale flows). It also positions the tool as a pre-transaction check, distinguishing it from sibling tools like x711_tx_simulate or x711_price_feed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises 'Use before any DeFi transaction to assess real-time protocol health', providing a clear when-to-use. It also lists expected returns and mentions the API key requirement. However, it does not explicitly state when not to use the tool or mention alternative tools for other contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_price_feed
Live crypto price feed via CoinGecko. Returns real-time USD price and 24h change for any supported asset. Supports: ETH, BTC, SOL, USDC, USDT, BNB, MATIC, AVAX, LINK, UNI, ARB, OP, MONAD and any CoinGecko ID. Use before tx_simulate to get current gas cost in USD. Returns: { prices: { [coinId]: { usd: number, usd_24h_change: number } }, symbols: string[], source: 'CoinGecko', timestamp: string }. Free tier: 10 calls/day, no API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Asset symbol(s) to price. Single: 'ETH'. Multiple space or comma separated: 'ETH BTC SOL' or 'ETH,BTC,SOL'. Case insensitive. | |
| symbol | No | Single asset symbol (legacy). Prefer query. | |
| symbols | No | Array of asset symbols (legacy). Prefer query. | |
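A sketch of a multi-asset price request using the preferred query parameter.

```typescript
// Hypothetical arguments for x711_price_feed (free tier: 10 calls/day, no API key).
const priceArgs = {
  query: "ETH,BTC,SOL", // space- or comma-separated, case-insensitive
};

// Return shape as documented above.
interface PriceFeedResult {
  prices: { [coinId: string]: { usd: number; usd_24h_change: number } };
  symbols: string[];
  source: "CoinGecko";
  timestamp: string;
}
```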
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description bears full burden. It discloses the return format (JSON with prices, symbols, source, timestamp), supported assets, rate limits, and no API key requirement. It does not mention any destructive actions, which aligns with a read-only feed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is tightly written with no redundant sentences. It front-loads the core purpose, gives examples, and concludes with return type and limits. Every sentence contributes essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description fully specifies the return structure and includes all necessary context: supported assets, usage hint, rate limits, and no API key. This is complete for a price feed tool with three simple parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% parameter description coverage, so baseline is 3. The description adds value by explaining preferred usage ('Prefer query') and case insensitivity, which supplements the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as a live crypto price feed via CoinGecko, returning USD price and 24h change. It lists specific supported assets and distinguishes from sibling tools (e.g., data retrieval, transactions) by focusing solely on price data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises using this tool before tx_simulate to get current gas cost in USD, providing a concrete use case. It also notes free tier limits (10 calls/day) but does not include explicit when-not-to-use or alternative tools for price data, though none exist among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_social_oracle
Crypto social sentiment — narrative pulse, hype velocity, community data via CoinGecko + DexScreener. Returns real-time sentiment score, trending status, community size, developer activity. Use before trading decisions. Returns: { token, sentiment_score, trending, community_score, dev_score, narrative, source }. Requires API key.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token symbol or CoinGecko ID to analyze. Examples: 'ETH', 'bitcoin', 'solana', 'monad'. | |
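A minimal sketch of a sentiment query; again the return fields are names only, since the description gives no types.

```typescript
// Hypothetical arguments for x711_social_oracle (requires an API key).
const oracleArgs = {
  token: "monad", // token symbol or CoinGecko ID
};

// Documented return fields (types unspecified in the source):
// token, sentiment_score, trending, community_score, dev_score, narrative, source.
```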
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description notes API key requirement but does not disclose rate limits, effect on state (assumed read-only), or other behavioral traits beyond return format. With no annotations, the description carries the full burden but provides only minimal disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus a return format line. No filler; essential information front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool with no output schema, but could elaborate on interpretation of sentiment_score or trending for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with examples; the description does not add beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the purpose: crypto social sentiment analysis via CoinGecko and DexScreener, listing specific return fields like sentiment_score, trending, and community_score. It distinguishes from sibling tools like price_feed or onchain_insight by focusing on social data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use before trading decisions' and mentions API key requirement. Does not specify when not to use or directly reference alternatives, but the context of sibling tools implies its niche.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_tx_broadcast
Relay a pre-signed EVM transaction to Base, Ethereum, Arbitrum, Optimism, or Polygon. The agent signs locally — private keys NEVER leave the agent. x711 submits via eth_sendRawTransaction using public RPC. After every broadcast (win or loss), an anonymized pattern is written to The Hive — collective intelligence that makes every future agent smarter. Set viral_conquest: true to get a CONQUEST RECEIPT + ready-to-post social templates for X + Bluesky that auto-recruit agents into x711, plus a live Hive entry under the /conquests namespace. Zero extra latency on the core tx path. Returns: { success, tx_hash, explorer_url, hive, conquest }. Tagline: TX_EXEC + VIRAL_CONQUEST = Your agent doesn't just execute. It becomes the marketing department for the entire Hive.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Target chain. Must match the chain the tx was signed for. | |
| safe_demo | No | Set to true to run a full TX_EXEC demo with zero risk. No real transaction is submitted. x711 fetches live gas prices from Base mainnet, generates a real USD gas estimate, and runs the full CONQUEST + GENESIS_MODE flow. Returns a complete CONQUEST RECEIPT with ready-to-post X + Bluesky templates. GENESIS_MODE activates automatically if this is your first TX_EXEC call. Free, no payment required. Perfect for testing your integration before committing a real tx. Combine with viral_conquest: true for the full experience. | |
| signed_tx | No | Pre-signed raw transaction hex (0x-prefixed). Sign locally with your wallet SDK. | |
| simulation_id | No | Optional ID from a prior tx_simulate call — links simulation + broadcast in the Hive for better pattern matching. | |
| viral_conquest | No | Set to true to activate VIRAL CONQUEST mode: generates a CONQUEST RECEIPT with tx summary, creates ready-to-post social blast templates for X (Twitter) and Bluesky, seeds a LIVE_CONQUEST entry into The Hive /conquests namespace so every other agent sees your win instantly. Zero extra latency on the core tx path — all conquest work is fire-and-forget. Defaults to false for backward compatibility. | false |
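A sketch of a broadcast payload; the signed hex and simulation ID are placeholder values, and the transaction must be signed locally before this call.

```typescript
// Hypothetical arguments for x711_tx_broadcast. The raw transaction is signed
// locally with your wallet SDK; only the signed hex leaves the agent.
const broadcastArgs = {
  chain: "base",            // must match the chain the tx was signed for
  signed_tx: "0x02f8...",   // placeholder; a real pre-signed raw tx hex goes here
  simulation_id: "sim_123", // placeholder ID from a prior x711_tx_simulate call
  viral_conquest: false,    // opt in to the CONQUEST RECEIPT flow
};

// Documented return fields: success, tx_hash, explorer_url, hive, conquest.
```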
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behaviors: private keys never leave the agent, public RPC used, anonymized pattern written to The Hive, zero extra latency for viral mode. It covers security, network effects, and failure handling ('win or loss'). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with core purpose and security. It is structured logically: action, security, network, Hive, viral mode, return values. However, it runs long, and the closing tagline is marketing repetition that could be trimmed. Still efficient for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description compensates by specifying return fields: success, tx_hash, explorer_url, hive, conquest. It covers demo mode, viral mode, and the Hive integration. It explains the post-broadcast pattern writing and social templates. Comprehensive for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds marginal value beyond schema: it explains the overall flow but does not significantly enhance parameter understanding. For example, chain and signed_tx details are already clear in schema. The viral_conquest and safe_demo parameters are described similarly in both.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action: 'Relay a pre-signed EVM transaction' to five named chains. It distinguishes from sibling tools like x711_tx_simulate by focusing on broadcast. The mention of viral_conquest mode adds specific functionality, further clarifying the purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides guidance on how to use: sign locally, submit via eth_sendRawTransaction. It explains the safe_demo parameter for testing and the viral_conquest parameter for marketing. However, it does not explicitly contrast with alternatives like x711_tx_simulate, leaving some ambiguity about when to broadcast vs simulate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_tx_simulate
Simulate an on-chain transaction on Base, Ethereum, Arbitrum, Optimism, or Polygon BEFORE broadcasting. Uses public RPC eth_call + eth_estimateGas — no wallet or API key needed. Pulls live gas prices, estimates cost in USD, and enriches with Hive wisdom from prior tx patterns on the same contract. Always simulate before calling x711_tx_broadcast. Returns: { success: bool, simulation.result, gas.estimate, gas.cost_usd, hive_wisdom, next_step }. Free tier: 3 sims/day (no API key needed).
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target contract or wallet address (0x + 40 hex chars). | |
| from | No | Sender address (optional). Defaults to zero address for pure simulation. | zero address |
| chain | No | Target chain. Defaults to 'base'. | 'base' |
| value | No | ETH value in wei (as decimal string or 0x hex). Use '0' for non-payable calls. | |
| calldata | No | ABI-encoded calldata hex (0x-prefixed). Omit or use '0x' for plain ETH transfers. | |
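A sketch of simulating a plain ETH transfer on Base; the target address is a placeholder.

```typescript
// Hypothetical arguments for x711_tx_simulate (free tier: 3 sims/day).
const simulateArgs = {
  to: "0x0000000000000000000000000000000000000001", // placeholder target address
  chain: "base",             // optional; defaults to 'base'
  value: "1000000000000000", // 0.001 ETH in wei, as a decimal string
  calldata: "0x",            // plain ETH transfer
};

// Documented return fields: success, simulation.result, gas.estimate,
// gas.cost_usd, hive_wisdom, next_step.
```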
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses the tool's behavior: uses public RPC eth_call + eth_estimateGas, no wallet/API key needed, estimates cost in USD, enriches with Hive wisdom, and returns specified fields. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is informative but slightly verbose; however, every sentence adds value and it is well-structured with front-loaded purpose. Could be more concise but still highly effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters and no output schema, the description covers the return structure, integrates with sibling tools, and provides additional context like free tier limits. It is fully sufficient for an AI agent to understand and use the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds minimal extra parameter detail beyond schema (e.g., default chain is base), but does not significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool simulates on-chain transactions on multiple chains before broadcasting, and explicitly distinguishes it from the sibling x711_tx_broadcast by instructing to always simulate first.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use context: 'Always simulate before calling x711_tx_broadcast'. Also mentions free tier limit (3 sims/day) which helps manage usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x711_web_search
Multi-source web search with automatic fallback chain: HackerNews Algolia → Wikipedia REST → DuckDuckGo → x711 Hive collective intelligence. Always returns results — if live web sources are unavailable, falls back to community-sourced agent knowledge from The Hive. Best for: tech/AI/crypto queries, current events, documentation discovery. Returns: { query: string, results: Array<{ title, url, snippet }>, source: string ('HackerNews'|'Wikipedia'|'DuckDuckGo'|'x711_hive'), count: number }. Free tier: 10 calls/day, no API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query. Natural language or keywords. Examples: 'ethereum gas optimization', 'base chain defi protocols', 'monad parallel execution'. | |
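A minimal sketch of a search call and the documented result shape.

```typescript
// Hypothetical arguments for x711_web_search (free tier: 10 calls/day, no API key).
const searchArgs = {
  query: "monad parallel execution", // natural language or keywords
};

// Return shape as documented above.
interface WebSearchResult {
  query: string;
  results: Array<{ title: string; url: string; snippet: string }>;
  source: "HackerNews" | "Wikipedia" | "DuckDuckGo" | "x711_hive";
  count: number;
}
```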
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description bears the full burden and discloses important behaviors: always returns results, the fallback chain, and free tier limits (10/day). It could mention latency or error behavior, but the current detail is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with key info (multi-source, fallback). Sentences are purposeful, though slightly verbose. Could trim redundancy in examples but overall well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description thoroughly explains return format, sources, and fallback behavior. Covers edge cases (unavailable sources) and usage limitations. Complete for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers the single 'query' parameter with examples. Description adds no new semantic info beyond schema (same examples). Baseline 3 since schema coverage is 100%.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as a multi-source web search with a specific fallback chain (HackerNews → Wikipedia → DuckDuckGo → Hive). It distinguishes itself from sibling tools by focusing on web search and community knowledge, which is unique among the listed x711 tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states best use cases (tech/AI/crypto, current events, documentation discovery) and mentions automatic fallback. Lacks explicit 'when not to use', but the context and sibling tool names imply this is the go-to for general web search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.