Askew x402
Server Details
DeFi yields, staking routing, semantic research & security intel via x402 micro-payments on Base
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: rubix1138/askew-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 8 of 8 tools scored.
Each tool targets a distinct function (health, intel feed, intel threats, offers, research, staking, yields, yields preview). However, askew_intel_feed and askew_intel_threats both cover intelligence, with the feed being a superset, which could cause minor confusion. The descriptions are clear enough to disambiguate.
All tools follow a consistent `askew_<noun_phrase>` format in snake_case. No mixing of conventions or verb styles. Predictable and readable.
8 tools is well-scoped for a server offering multiple distinct services (health, intelligence, research, staking, yields). Each tool serves a clear purpose without bloat.
Covers core functionalities: health check, intelligence, research, offers catalog, staking, and yield comparison. Missing a tool to execute purchases or transactions, but that is likely outside the server's scope. Minor gap for a fully self-contained agent workflow.
Available Tools
8 tools

askew_health
Service status for the Askew x402 endpoint. Returns wallet address, network, current pricing, and payment summary. Free — use to verify the service is operational before making paid calls.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
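A minimal connection-and-health-check sketch using the official TypeScript MCP SDK over Streamable HTTP (the transport this server advertises). The endpoint URL is a placeholder, since the listing leaves the server URL blank:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint — the listing does not publish the server URL.
const ASKEW_URL = new URL("https://example.com/mcp");

const client = new Client({ name: "askew-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(ASKEW_URL));

// Free call: verify the service is operational before making any paid calls.
const health = await client.callTool({ name: "askew_health", arguments: {} });
console.log(JSON.stringify(health.content, null, 2));

await client.close();
```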
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full responsibility. It discloses the tool is free and returns specific fields (wallet, network, pricing, payment summary), which is transparent about its non-destructive, informational behavior. Minor omissions: no mention of response format or behavior if service is unavailable, but still adequate for a simple health check.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loading the core purpose and key context. Every word serves a purpose, with no redundancy or unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no output schema, and minimal annotations, the description fully covers what the tool does and when to use it. It is self-contained and provides enough information for an agent to select and invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema is empty with 0 parameters, so schema coverage is 100% and the description does not need to add parameter information. Baseline 3 is appropriate as the description adds no parameter-level semantics beyond what the schema already conveys.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides service status for the Askew x402 endpoint, specifying it returns wallet address, network, pricing, and payment summary. It also explicitly says it is free and for verifying service, which distinguishes it from sibling tools that cover intel, threats, offers, research, staking, and yields.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description instructs to use this tool to verify the service is operational before making paid calls, providing clear context for its use. It does not explicitly mention alternatives or when not to use, but the free nature and verification purpose strongly imply its role relative to paid siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
askew_intel_feed
Aggregated intelligence feed combining research findings, active security threats, and live staking APY snapshot in a single call ($0.005 USDC). Sources: ChromaDB research library + Guardian log + staking.db. Best for: broad situational awareness — replaces three separate calls. Requires x402 payment on Base mainnet.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description covers cost and payment method, but does not disclose potential side effects, rate limits, or failure modes. The cost and sources are transparently stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact, with each sentence adding value: purpose, sources, cost, usage recommendation. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers what the tool returns (research, threats, staking APY) and cost, but omits output format. Given no output schema, a brief note on structure would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters and 100% coverage, so the description does not need to add parameter details. Baseline is 4, and the description adds context about the combined data.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it aggregates intelligence from three sources into one call, replacing three separate calls. It names the sources and the cost, making the purpose specific and distinct from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Best for: broad situational awareness — replaces three separate calls' and notes the x402 payment requirement on Base mainnet, providing clear when-to-use and prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
askew_intel_threats
Live threat intelligence from the Askew Guardian security monitor ($0.002 USDC). Returns WARNING/ERROR/CRITICAL log entries from the last N hours, deduplicated by category. Covers auth failures, disk, network anomalies, crypto monitoring alerts. Near-real-time Guardian log view. Best for: security triage before agent operations. Requires x402 payment on Base mainnet.
| Name | Required | Description | Default |
|---|---|---|---|
| hours | No | Lookback window in hours | 24 |
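A sketch of a triage call with the optional `hours` parameter narrowed from the 24h default, assuming a client already connected as in the askew_health sketch above and with the x402 payment flow handled by the caller's wallet tooling:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Pull only the last 6 hours of WARNING/ERROR/CRITICAL entries.
async function triageThreats(client: Client) {
  const result = await client.callTool({
    name: "askew_intel_threats",
    arguments: { hours: 6 }, // omit to use the 24h default
  });
  return result.content;
}
```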
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses payment requirement ($0.002 USDC, requires x402 on Base mainnet), deduplication by category, and near-real-time nature. Does not mention rate limits or failure behavior, but covers key behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is three sentences, each adding distinct value: identity/cost, output characteristics, and use case/payment. No redundancy, front-loaded with essential info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema and no annotations, description covers tool's purpose, output types, categories, payment, and use case. Minor omission of log entry format details, but sufficient for a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter (hours). Description adds no new details beyond the schema, simply restating the lookback window. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns threat intelligence log entries (WARNING/ERROR/CRITICAL) from Askew Guardian. Differentiates from siblings like askew_health and askew_intel_feed by specifying its security monitoring focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Best for: security triage before agent operations', providing clear context for when to use. Does not include explicit when-not-to-use or alternatives, but the use case is well-defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
askew_offers
Curated catalog of all available paid Askew endpoints with pricing, sample calls, and buyer intent context. Best starting point for agents exploring what Askew sells. No payment required.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full transparency burden. It notes 'No payment required' and implies read-only catalog access, and it discloses what the tool provides (pricing, sample calls, buyer intent). More detail on update frequency or data source would help, but it is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with purpose and key attributes. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description fully covers what the tool does and what it returns. It mentions key elements (pricing, sample calls, buyer intent) and the starting point recommendation. Complete for a catalog tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. The description adds value by explaining the output contents (pricing, sample calls, buyer intent context, best starting point). Baseline is 4 for zero params, and the description exceeds minimal expectations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a 'curated catalog of all available paid Askew endpoints' with pricing, sample calls, and buyer intent context. It is distinct from sibling tools like askew_health, askew_yields, etc., which suggest specific functionalities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly positions the tool as the 'best starting point for agents exploring what Askew sells', implying first-use context. It lacks explicit when-not-to-use guidance or alternatives, but the recommendation is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
askew_research_query
Semantic search over 500+ Askew agent-economy research findings, operational insights, and experiments ($0.003 USDC). Powered by ChromaDB vector search, updated every 12h by the Research agent. Collections: research_findings | agent_insights | experiments. Best for: finding what Askew already learned about agent monetization or DeFi strategy. Requires x402 payment on Base mainnet.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | Search query | |
| limit | No | Number of results to return (1-20) | 5 |
| collection | No | Collection to search: research_findings, agent_insights, or experiments | research_findings |
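A sketch exercising all three parameters, again assuming a connected client; the limit value and collection choice are arbitrary illustrations within the schema's documented ranges:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// limit must stay within 1-20; collection must be one of
// research_findings, agent_insights, or experiments.
async function searchResearch(client: Client, q: string) {
  const result = await client.callTool({
    name: "askew_research_query",
    arguments: { q, limit: 10, collection: "agent_insights" },
  });
  return result.content;
}
```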
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses key behaviors: payment requirement ('$0.003 USDC'), update cadence ('updated every 12h by the Research agent'), and underlying technology ('Powered by ChromaDB vector search'). It does not mention rate limits or result format, but for a read-only search tool the transparency is solid.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is composed of four short sentences that front-load the core purpose. It includes cost, technology, update frequency, collections, and best use case. Every sentence adds value; no fluff. Slightly dense but well-structured for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains the data scope, update frequency, cost, and available collections. However, it does not describe the output format (e.g., ranked results, scores, metadata) which is important for a search tool. No output schema exists, so the description should have covered return structure. Otherwise, it is adequate for basic invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All three parameters are fully described in the schema (q, limit, collection) with clear defaults and ranges (100% coverage). The description adds no new parameter-specific semantics beyond schema; it contextualizes the collection values but does not enhance understanding of the parameters themselves. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Semantic search over 500+ Askew agent-economy research findings, operational insights, and experiments', specifying the verb 'semantic search' and the resource. It distinguishes itself from sibling tools by focusing on a specific corpus of agent-economy research, and provides a concrete use case: 'Best for: finding what Askew already learned about agent monetization or DeFi strategy.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit guidance on when to use the tool ('Best for: finding what Askew already learned...') and mentions a critical requirement ('Requires x402 payment on Base mainnet'). However, it does not explicitly exclude any scenarios or mention alternative tools from the sibling list, so some differentiation is missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
askew_staking_router
Staking yield router for SOL and ATOM ($0.003 USDC). Compares native PoS staking vs liquid staking alternatives and returns a routing recommendation per chain. Data from DefiLlama + staking portfolio, cached up to 6h. Best for: deciding native vs liquid staking. Requires x402 payment on Base mainnet.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses data sources (DefiLlama + staking portfolio), caching (up to 6h), and payment requirement. Lacks details on output format or failure modes, but is not misleading.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is front-loaded with purpose and token scope, followed by data sources, cache, best-for, and payment. Each sentence adds value, though slightly verbose with '($0.003 USDC)' which could be separate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, tokens, comparison type, data sources, cache, and payment. Lacks output schema and detailed behavior description, but for a 0-param tool, it is mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has 0 parameters, so schema coverage is 100%. The description does not need to add parameter information; the baseline for zero-parameter tools is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs and resources: 'staking yield router', 'compares native PoS staking vs liquid staking', and specifies tokens SOL and ATOM. It clearly distinguishes from siblings like askew_yields by focusing on routing recommendations for a specific decision.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States 'Best for: deciding native vs liquid staking' giving clear context. Implicitly excludes other yield queries. Mentions payment requirement, but does not explicitly list alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
askew_yields
Live DeFi yield comparison across 5 chains in one call ($0.002 USDC). Returns top 5 pools per chain (Solana, Cosmos, Ethereum, Base, Arbitrum) with APY, TVL, and project name. Data from DefiLlama, cached up to 6h. Best for: quick yield scanning before moving capital. Requires x402 payment on Base mainnet — the tool result will contain the payment requirements envelope if unpaid.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
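The description promises a payment-requirements envelope on unpaid calls but does not document its shape. A sketch that detects it, assuming (unverified for this server) that the envelope mirrors the x402 spec's 402 response body with x402Version and accepts fields:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

async function getYields(client: Client) {
  const result = await client.callTool({ name: "askew_yields", arguments: {} });
  const first = Array.isArray(result.content) ? result.content[0] : undefined;
  const text = first?.type === "text" ? String(first.text) : "";

  try {
    // Assumed envelope shape: { x402Version, accepts: [...] } per the x402 spec.
    const body = JSON.parse(text);
    if (body?.x402Version && Array.isArray(body.accepts)) {
      return { unpaid: true, envelope: body };
    }
  } catch {
    // Non-JSON text: treat as a normal, already-paid result.
  }
  return { unpaid: false, data: text };
}
```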
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses payment requirement on Base mainnet, caching up to 6h, data source (DefiLlama), and that it returns top 5 pools per chain. No annotations provided, so description carries full burden; it adds meaningful behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise at 4 sentences with front-loaded key info (cost, purpose). Could be slightly more structured (e.g., bullet points) for easier scanning, but every sentence is value-adding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers main aspects: chains, return fields, data source, caching, payment. Lacks exact return format (e.g., JSON structure) and does not address error scenarios, but is adequate for the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist (0 params, 100% schema coverage). The description fully compensates by explaining what the tool does, what it returns, and the payment requirement, providing all necessary semantic context beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it provides live yield comparison across 5 chains, returning top 5 pools with APY, TVL, and project name. However, it does not differentiate from the sibling tool 'askew_yields_preview', which may serve a similar role.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Indicates best use case ('quick yield scanning before moving capital') and mentions payment requirement, but does not explicitly state when not to use or provide alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
askew_yields_preview
Free preview of top DeFi yield pool per chain across Solana, Cosmos, Ethereum, Base, and Arbitrum. Returns the single best APY option per chain. Use before buying askew_yields for a full comparison. No payment required.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
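Since the preview is free and the full comparison costs $0.002 USDC, a natural pattern is to gate the paid call on the preview. A sketch with the decision delegated to the caller, since neither response shape is documented:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

async function scanYields(
  client: Client,
  needFullComparison: (preview: unknown) => boolean,
) {
  // Free call: single best APY option per chain.
  const preview = await client.callTool({
    name: "askew_yields_preview",
    arguments: {},
  });
  if (!needFullComparison(preview.content)) {
    return preview.content;
  }
  // Paid call ($0.002 USDC via x402 on Base): top 5 pools per chain.
  const full = await client.callTool({ name: "askew_yields", arguments: {} });
  return full.content;
}
```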
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It describes the action (preview, top yield, no payment) but lacks disclosure of data freshness, update frequency, rate limits, or behavior when no pools are found. Adequate for a simple read-only tool but could be more transparent about limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words. Front-loaded with the key action and scope, then usage guidance. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description covers the essential purpose and usage. However, it lacks details on output format, real-time nature, or any prerequisites. Still reasonably complete for a simple preview tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has zero parameters, so schema coverage is 100%. Description adds value by explaining what the tool returns (best APY per chain) without needing parameter details. Baseline for zero params is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides a free preview of the top DeFi yield pool per chain across specifically named chains (Solana, Cosmos, Ethereum, Base, Arbitrum), returning the single best APY option per chain. This distinguishes it from sibling tools like askew_yields which offers a full comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use before buying askew_yields for a full comparison' and 'No payment required.' This gives clear when-to-use context and names the alternative tool (askew_yields) for a more comprehensive analysis.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
A connector's status is marked unhealthy when Glama cannot successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!