402s.shop
Server Details
12 utility endpoints for AI agents (QR codes, screenshots, scraping, AI summarization). Two payment rails: pre-paid Bearer credits via NowPayments or x402 USDC pay-per-call on Base mainnet.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
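
The two payment rails imply two different call patterns. Here is a minimal, unofficial sketch using the TypeScript MCP SDK (`@modelcontextprotocol/sdk`); the endpoint path and token variable are placeholders, since the listing does not show the server URL, and the x402 flow is summarized from the public x402 spec rather than from this server's own docs:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the URL field above is blank, so the path is assumed.
const SERVER_URL = new URL("https://402s.shop/mcp");

// Rail 1: pre-paid credits. A Bearer token purchased via NowPayments is
// attached to every HTTP request made by the transport.
const transport = new StreamableHTTPClientTransport(SERVER_URL, {
  requestInit: {
    headers: { Authorization: `Bearer ${process.env.CREDIT_TOKEN}` },
  },
});

const client = new Client({ name: "example-agent", version: "0.1.0" });
await client.connect(transport);

// Rail 2 (x402, not shown): a call without credentials returns HTTP 402 plus
// payment requirements; the caller settles USDC on Base and retries the same
// request with an X-PAYMENT header carrying proof of payment.
```

The per-tool sketches further down reuse this `client`.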
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 16 of 16 tools scored.
Each tool targets a distinct function: crypto prices, balances, ENS, gas; domain checking and WHOIS; web scraping, screenshots, PDF, metadata, emails, tech detection; media generation (OG image, QR code); summarization and transcription. No two tools serve the same purpose.
All 16 tools use consistent snake_case naming with descriptive verb_noun patterns (coin_price, domain_check, extract_emails, gas_price, etc.). No mixed conventions or cryptic abbreviations.
16 tools is well-scoped for a general-purpose utility server. The set covers multiple domains without unnecessary bloat, and each tool feels justified for the server's apparent goal of providing diverse web and crypto utilities.
The server offers a broad surface covering crypto, domains, web analysis, and media generation. Minor gaps include no DNS lookup beyond WHOIS and no crypto transaction capabilities, but core workflows (price, balance, ENS, screenshot, PDF, transcription, summary) are present.
Available Tools (16)

coin_price (A)
Crypto spot prices in USD via CoinGecko. Supports BTC, ETH, USDC, USDT, SOL, XMR, and 25+ tickers.
| Name | Required | Description | Default |
|---|---|---|---|
| symbols | No | Ticker symbols. Default ["BTC","ETH","USDC"]. Max 25. | |
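
As a usage sketch (reusing the connected `client` from the payment-rail example above; the response shape is undocumented, so it is left untyped):

```typescript
// Fetch spot prices for two tickers. Omitting `symbols` falls back to the
// default ["BTC","ETH","USDC"]; at most 25 symbols are accepted per call.
const prices = await client.callTool({
  name: "coin_price",
  arguments: { symbols: ["BTC", "SOL"] },
});
```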
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It does not disclose rate limits or error behavior, but the tool is straightforward.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with key information, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with no output schema; description adequately covers purpose and parameters, though return format is implied.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with descriptions; the description adds examples (BTC, ETH, etc.) and notes 25+ tickers, providing extra context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides crypto spot prices in USD via CoinGecko, lists supported assets, and distinguishes from siblings like gas_price.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Does not explicitly specify when to use or alternatives; usage is implied but lacks explicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
domain_check (A)
Check whether a domain appears available (heuristic via WHOIS, best-effort).
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Domain name like example.com. | |
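
The assessments below repeatedly contrast this tool with `whois`, so a side-by-side sketch may help (hypothetical arguments, `client` as above):

```typescript
// Quick, best-effort availability hint vs. full normalized WHOIS record.
const availability = await client.callTool({
  name: "domain_check", // heuristic: treat the answer as a hint, not ground truth
  arguments: { domain: "example.com" },
});

const record = await client.callTool({
  name: "whois", // prefer this when registrar, dates, nameservers, or status matter
  arguments: { domain: "example.com" },
});
```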
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses heuristic and best-effort nature, but with no annotations, the description carries full burden. Does not mention potential failures, rate limits, or varying reliability beyond 'best-effort'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no filler, directly communicates purpose and limitations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with one parameter, no output schema. Description covers the primary purpose and key limitation (heuristic/best-effort) but omits return value format (e.g., boolean).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of the 1 parameter. The description adds the heuristic context but does not elaborate on parameter format beyond the schema's 'Domain name like example.com'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Check' and resource 'domain availability', and adds context 'heuristic via WHOIS, best-effort'. Differentiates from sibling 'whois' by focusing on availability rather than raw WHOIS data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use for a quick, best-effort availability check but does not explicitly mention when to use this tool versus alternatives like 'whois'. No explicit 'when not to use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ens_resolve (A)
Resolve ENS name to address (forward) or address to primary ENS name (reverse). Includes avatar where available.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ENS name like "vitalik.eth". Provide name OR address, not both. | |
| address | No | 0x... EVM address for reverse lookup. Provide name OR address, not both. | |
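
Because the schema flags `name` and `address` as mutually exclusive, each direction is its own call. A sketch (`client` as above; the address is vitalik.eth's well-known one, used purely as an example):

```typescript
// Forward: ENS name -> address (plus avatar where available).
const forward = await client.callTool({
  name: "ens_resolve",
  arguments: { name: "vitalik.eth" },
});

// Reverse: address -> primary ENS name. Never pass name and address together.
const reverse = await client.callTool({
  name: "ens_resolve",
  arguments: { address: "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045" },
});
```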
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It mentions forward/reverse lookups and avatars, but does not disclose limitations, error behavior (e.g., unresolved names), rate limits, or the return format. This is insufficient for a resolver tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the main purpose. Every word contributes meaning without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so the description should explain what the tool returns (e.g., address, avatar, ENS name). It only mentions avatar 'where available' but omits the primary return values. This is incomplete for a resolver.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both 'name' and 'address'. The description adds value by clarifying that only one should be provided ('name OR address, not both') and gives an example ('vitalik.eth'), which helps beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states both forward and reverse resolution of ENS names to addresses and vice versa, using specific verbs 'resolve' and specifying 'forward' and 'reverse'. It also mentions avatar retrieval, distinguishing it from any sibling tools which are unrelated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for resolving ENS names or addresses, but does not explicitly state when to use or not use it, nor does it mention prerequisites. However, sibling tools do not overlap in functionality, so no exclusions are needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
extract_emails (A)
Find email addresses on a webpage (mailto links + body text), deduplicated.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | http(s) URL. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses the sources (mailto links and body text) and deduplication, providing good insight. However, it omits potential error handling or output format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded with the action and includes essential details without any wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 parameter, no output schema, no annotations), the description is adequate but incomplete: it does not explain the output format or behavior in edge cases (e.g., no emails found, invalid URL).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for the single parameter 'url', described as 'http(s) URL'. The description adds no further semantic meaning beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Find email addresses'), the resource ('on a webpage'), and the sources ('mailto links + body text'), with the output property ('deduplicated'). It is specific and distinct from all sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when email addresses are needed from a webpage, but does not explicitly state when not to use or provide alternatives. Given no close siblings, clarity is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
extract_metadata (A)
Extract Open Graph, Twitter card, and standard meta tags from any URL.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | http(s) URL. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It does not disclose any behavioral traits such as handling of redirects, error responses, or request timeouts. The description is minimal and lacks transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that is front-loaded with the core action. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one required parameter and no output schema, the description is somewhat complete but fails to indicate the output format or structure, which would aid agent understanding. It is minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a description for 'url' stating 'http(s) URL.' The tool description adds no additional meaning beyond that, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the action ('Extract') and the specific resources ('Open Graph, Twitter card, and standard meta tags'). It distinguishes from sibling tools like 'og_image' or 'website_tech' by focusing on meta tags extraction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'og_image' or 'summarize'. The usage is implied from the name and description but not formally stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gas_price (A)
Current gas price in gwei across EVM chains. Defaults to Base + Ethereum.
| Name | Required | Description | Default |
|---|---|---|---|
| chains | No | Supported: ethereum, base, arbitrum, optimism, polygon. Default ["base","ethereum"]. Max 5. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description bears full responsibility. It correctly indicates a non-destructive read operation, but lacks details on rate limits, caching, accuracy, or error handling for unsupported chains.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that conveys core functionality without unnecessary words. Perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional param, no output schema), the description is largely complete. However, adding brief note about the output format (e.g., whether it returns a single value or per-chain object) would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds the default value and clarifies supported chains beyond the schema's list. This provides useful context for parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns current gas prices in gwei across EVM chains, with a default to Base and Ethereum. The verb+resource is specific and distinguishes it from sibling tools like coin_price.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving gas prices, but no explicit when-to-use or when-not-to-use guidance is provided. Sibling tools are different enough that confusion is unlikely, but alternatives are not mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
og_image (A)
Generate a 1200x630 social media OG image with title + subtitle.
| Name | Required | Description | Default |
|---|---|---|---|
| theme | No | Default "dark". | |
| title | Yes | Main heading (1-200 chars). | |
| subtitle | No | Optional subhead (up to 300 chars). | |
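
A sketch of a full call (`client` as above; how the PNG comes back, URL versus inline binary, is undocumented, as the assessment below notes):

```typescript
// Generate a 1200x630 OG image from a title, optional subtitle, and theme.
const og = await client.callTool({
  name: "og_image",
  arguments: {
    title: "402s.shop",                          // required, 1-200 chars
    subtitle: "Utility endpoints for AI agents", // optional, up to 300 chars
    theme: "dark",                               // optional; "dark" is the default
  },
});
```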
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the image dimensions and that it generates an image from title and subtitle, but does not mention any potential behaviors like caching, cost, or output format (e.g., URL vs. binary).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no unnecessary words. It front-loads the key information (dimensions and content) and is efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, the description is adequate but lacks details about the output (e.g., image format, how the result is returned). No output schema exists, so the description could have added more context about return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaning beyond the input schema by specifying the image dimensions (1200x630) and the combination of title + subtitle. Since schema coverage is 100%, the baseline is 3, and the description provides additional context for the title and subtitle parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Generate', specifies the resource '1200x630 social media OG image', and lists the components 'title + subtitle'. It uniquely identifies the tool's function among siblings which are unrelated to image generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool over alternatives, nor are there any exclusions or context about prerequisites. The description only states what it does, not when it should be preferred.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pdf_from_url (B)
Render any URL to a print-ready PDF.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | http(s) URL. | |
| format | No | Default "A4". | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, and the description does not disclose behavioral aspects like timeouts, authentication needs, page size handling, or what happens if the URL fails to render. 'Print-ready PDF' implies output but lacks detail on format and constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, perfectly concise with no extraneous words. It is front-loaded and immediately communicates the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description should clarify the return value (e.g., file content, base64, path). It does not. Also lacks details on supported URL types, page dimensions, or error handling, leaving significant ambiguity for a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both parameters have descriptions in the schema (url: 'http(s) URL.', format: 'Default "A4".'), giving 100% coverage. The tool description adds no additional meaning beyond what the schema already provides, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Render any URL to a print-ready PDF' clearly states the action (render), the input (any URL), and the output (print-ready PDF). It is distinct from sibling tools like screenshot or extract_metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as screenshot or word_count. There is no mention of prerequisites, limitations, or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
qr_code (B)
Generate a QR code PNG from text or URL.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Width in pixels (100-1000, default 300). | |
| text | Yes | Text/URL to encode (1-2048 chars). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It only states 'Generate a QR code PNG', which implies a creation action but does not clarify if it is read-only, requires permissions, or has any side effects. No further behavioral context is given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, short sentence with no wasted words. It is concise and front-loaded with the purpose, though it could carry slightly more context without losing that economy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not explain the return format (e.g., PNG binary) or error handling. It is adequate for a simple tool but lacks details on output and constraints beyond the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so both parameters (text, size) are well-documented. The description adds 'from text or URL', which just restates the text parameter, providing no additional semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Generate', the output 'QR code PNG', and the input 'from text or URL'. It is specific and distinguishes from sibling tools, none of which generate QR codes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for generating QR codes from text or URLs, but provides no explicit guidance on when to use this tool versus alternatives, nor any when-not or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
screenshot (B)
Take a PNG screenshot of any URL using headless Chromium.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | http(s) URL to capture. | |
| fullPage | No | Scroll to capture full page (default false). | |
| viewport | No | Default 1280x800. | |
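
A sketch (`client` as above); the viewport string format is an assumption inferred from the "1280x800" default:

```typescript
// Full-page capture at a custom viewport via headless Chromium.
const shot = await client.callTool({
  name: "screenshot",
  arguments: {
    url: "https://example.com",
    fullPage: true,        // default false: capture only the visible viewport
    viewport: "1920x1080", // "WIDTHxHEIGHT" assumed from the stated default
  },
});
```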
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses use of headless Chromium but omits important behaviors: JavaScript execution, authentication handling, timeouts, error scenarios, or output format details. No annotations provided to compensate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, 10 words, conveys core purpose without redundancy. Efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks details on return format (binary, base64, path), error handling, size limits, or timeouts. Given no output schema and 3 parameters, description is too sparse.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so description adds minimal value. Describes basic functionality but no additional semantics beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Explicitly states verb (take), resource (PNG screenshot of any URL), and tool (headless Chromium). Clearly distinguishes from sibling tools like pdf_from_url (PDF) and og_image (OG image).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for screenshots but provides no when-to-use or when-not-to-use guidance. No mention of alternatives like pdf_from_url or limitations (e.g., dynamic content).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
summarize (A)
AI-powered summary (Claude Haiku 4.5) of supplied text or fetched URL.
| Name | Required | Description | Default |
|---|---|---|---|
| url | No | OR a URL to fetch and summarize. | |
| text | No | Direct text to summarize (50-50000 chars). | |
| length | No | Default "medium". | |
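
The `url` and `text` parameters read as mutually exclusive, so here is a sketch of both modes (`client` as above; `longArticle` is a stand-in for real content):

```typescript
// Mode 1: fetch a URL and summarize it; `length` defaults to "medium".
const fromUrl = await client.callTool({
  name: "summarize",
  arguments: { url: "https://example.com/post", length: "short" },
});

// Mode 2: summarize supplied text directly (50-50000 chars).
const longArticle = "..."; // stand-in: real input must be at least 50 chars
const fromText = await client.callTool({
  name: "summarize",
  arguments: { text: longArticle },
});
```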
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavior. It notes the AI model (Claude Haiku 4.5) but fails to mention limitations like maximum token count, error handling, or what happens if both 'url' and 'text' are provided. The schema covers character limits (50-50000) but the description omits this.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that immediately conveys the tool's purpose. It is well-structured and front-loaded, but could include slightly more detail without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is minimal for a tool with three parameters and no output schema. It lacks information on output format, error responses, and example usage. Given the complexity, the description should provide more context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds no additional meaning beyond what is already in the parameter descriptions. It does not clarify mutual exclusivity of 'url' and 'text' or explain the 'length' enum options beyond 'default medium'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'AI-powered summary (Claude Haiku 4.5) of supplied text or fetched URL.' It includes the specific AI model and resource types, distinguishing it from sibling tools like domain_check or extract_emails.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for summarization but does not provide explicit guidance on when to use this tool versus alternatives, nor does it mention when not to use it or specify prerequisites. The context of sibling tools suggests differentiation, but the description lacks direct instruction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wallet_balance (A)
Native + USDC balance for an EVM address across multiple chains. Defaults to Base + Ethereum.
| Name | Required | Description | Default |
|---|---|---|---|
| chains | No | Supported: ethereum, base, arbitrum, optimism, polygon. Default ["base","ethereum"]. Max 5. | |
| address | Yes | 0x... EVM address. | |
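
A sketch (`client` as above; the address is an arbitrary example):

```typescript
// Native + USDC balances for one address, filtered to two chains (max 5).
const balances = await client.callTool({
  name: "wallet_balance",
  arguments: {
    address: "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045",
    chains: ["base", "arbitrum"], // omit for the default ["base","ethereum"]
  },
});
```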
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must cover behavioral traits. It indicates a read operation (getting balance) but does not mention whether the operation is destructive, requires authentication, or has rate limits. The description adds minimal transparency beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are front-loaded with the most important information. No unnecessary words; every phrase adds value. Very concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the purpose is clear, the description does not explain the return value format (e.g., how balances are represented, if native and USDC are separate). Given no output schema, this is a gap. However, the description covers the core functionality adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, both parameters have descriptions. The description adds value by clarifying that the balance includes native and USDC, and specifying default chains (Base + Ethereum) and supported chains, which goes beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states exactly what the tool does: get native + USDC balance for an EVM address across multiple chains. It clearly specifies the resource (wallet balance) and scope (multiple chains with defaults). None of the sibling tools provide this functionality, so it is well-distinguished.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context (balance checking across chains) but does not explicitly say when to use this tool versus siblings like coin_price or gas_price. No exclusions or alternative suggestions are provided, leaving the agent to infer usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
website_tech (A)
Detect framework, CMS, analytics, and hosting/CDN of any website.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | http(s) URL. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description minimally suggests a read operation ('detect') but fails to disclose behavioral details like network requests, error handling, or what happens with invalid URLs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence with no extraneous words; front-loaded with the action and target.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the tool is simple, the description lacks context about return format (e.g., JSON with tech names) and doesn't clarify what 'detect' entails, leaving ambiguity for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single param 'url', so the description adds no new semantic value beyond the schema's 'http(s) URL.' Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states what the tool does ('detect framework, CMS, analytics, and hosting/CDN') with a specific verb and resource. It distinguishes itself from siblings like 'extract_metadata' by focusing on technology stack detection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for tech detection but provides no explicit guidance on when to use this tool versus alternatives, nor any exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whois (A)
WHOIS lookup with normalized fields (registrar, dates, nameservers, status).
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Domain name like example.com. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It mentions normalized fields but lacks disclosure on caching, rate limits, TLD support, or error handling (e.g., non-existent domains). Partial transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no redundancy. Core action and key output fields are front-loaded. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 1-parameter tool without output schema. Lists returned field categories. Minor gap: no mention of output format or limitations, but generally complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add meaning beyond the schema for the 'domain' parameter (e.g., no clarification on punycode or format). No extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it performs a WHOIS lookup, specifies the resource (domain), and lists normalized fields returned (registrar, dates, nameservers, status). This is specific and distinguishes it from siblings like domain_check or extract_emails.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like domain_check. No prerequisites, exclusions, or context provided for proper selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
word_count (B)
Count words and reading time on any webpage.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | http(s) URL. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits, but it only states the basic action. It does not cover how the tool handles large pages, JavaScript rendering, or output format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, well-front-loaded sentence that is concise and to the point.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (1 param, no output schema), the description should provide more context on word counting methodology, reading time calculation, or limitations. It is incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (url parameter described as 'http(s) URL'), so baseline is 3. The description adds no additional semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool counts words and reading time on any webpage, which is a specific verb-resource combination. It distinguishes from sibling tools like domain_check or screenshot, which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, nor are there any usage constraints or contexts mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
youtube_transcript (B)
Fetch a YouTube video transcript with timestamps.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | YouTube watch / shorts / embed / youtu.be URL. | |
| lang | No | BCP-47 language code (default "en"). | |
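
A sketch (`client` as above; the video URL is an arbitrary public example):

```typescript
// Fetch a timestamped transcript; watch, shorts, embed, and youtu.be URLs work.
const transcript = await client.callTool({
  name: "youtube_transcript",
  arguments: {
    url: "https://youtu.be/dQw4w9WgXcQ",
    lang: "en", // BCP-47 code; "en" is the default
  },
});
```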
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It does not mention error cases (e.g., no transcript available), rate limits, or authentication needs, leaving significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One concise sentence front-loaded with key information, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, the description does not specify return format or error handling, making it incomplete for a tool with 2 parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with both parameters described. The description adds no extra meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Fetch' and the resource 'YouTube video transcript with timestamps', which is specific and distinguishes it from siblings like 'extract_metadata' or 'summarize'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives, but the purpose is straightforward enough that an agent can infer its usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.