Glama
Ownership verified

Server Details

Make money with x402. See which services are earning, find profitable gaps, and price your API right — backed by real on-chain payment data from 90K+ services and $46M+ in volume.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

6 tools
analyze_service (Analyze Service), Grade A

Deep competitive analysis of any service — composite score, competitive percentile, revenue data, top competitors, and strategic recommendations. Pass a domain name, URL, or service ID. Preview is free — $5 USDC via x402 unlocks the full analysis.

Parameters (JSON Schema):
- query (required): Service ID, domain name (e.g., 'heurist.ai'), or URL to analyze
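To make the single-parameter interface concrete, here is a minimal sketch of the JSON-RPC 2.0 payload an MCP client would send to invoke this tool. The envelope follows the MCP `tools/call` convention; the specific `query` value is just an illustrative example from the description.

```python
import json

# Sketch of an MCP tools/call request for analyze_service.
# The "query" argument accepts a domain name, URL, or service ID.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_service",
        "arguments": {"query": "heurist.ai"},  # example domain from the docs
    },
}

print(json.dumps(request, indent=2))
```

Note that the free preview versus the $5 USDC x402 unlock is negotiated by the server side; the request shape above is the same either way.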
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It reveals important behavioral traits: the tool has a freemium model (free preview, $5 paid upgrade), requires payment for full analysis, and accepts multiple input formats. However, it doesn't disclose rate limits, authentication requirements, or what exactly constitutes the 'preview' versus 'full analysis' in terms of data returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first clause, followed by specific outputs, usage instructions, and pricing details. Every sentence earns its place with essential information, and there's zero wasted verbiage in just two well-structured sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description provides adequate coverage of purpose, usage, and behavioral constraints (pricing model). However, it doesn't describe the output format or structure, which is a significant gap given the complex analysis promised (composite scores, percentiles, revenue data, recommendations). The description compensates somewhat by listing what will be returned but not how.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage and only 1 parameter, the schema already documents the 'query' parameter well. The description adds marginal value by providing examples ('e.g., 'heurist.ai') and clarifying the three acceptable input types (domain name, URL, or service ID), but doesn't fundamentally change the parameter understanding beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'Deep competitive analysis of any service' and lists specific outputs (composite score, competitive percentile, revenue data, top competitors, strategic recommendations). It distinguishes itself from siblings like 'find_opportunities' or 'market_overview' by focusing on individual service analysis rather than broader market insights.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('Pass a domain name, URL, or service ID') and mentions a free preview with paid upgrade option. However, it doesn't explicitly state when NOT to use it or directly compare it to sibling tools like 'search_services' or 'suggest_pricing' that might serve related but different purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_opportunities (Find Opportunities), Grade B

Identify the most profitable gaps in the agent economy — specific niches, proven price points, competitor counts, and build ideas backed by real revenue data. The kind of insight you'd need weeks of research to find. Preview is free — $25 USDC via x402 unlocks all opportunities.

Parameters (JSON Schema):
- limit (optional): Max number of opportunities to return
Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It reveals key traits: it's a research tool that provides insights (implied read-only), includes a free preview with paid access ($25 USDC via x402), and delivers data-backed opportunities. However, it lacks details on rate limits, error handling, or response format.

Conciseness: 4/5

The description is appropriately sized with two sentences: the first explains the tool's purpose and value, and the second covers access details. It's front-loaded with core functionality, though the promotional tone ('weeks of research') slightly reduces efficiency.

Completeness: 3/5

Given no annotations, no output schema, and a simple input schema, the description is moderately complete. It covers the tool's purpose and access model but lacks details on output structure, error cases, or integration with sibling tools. For a tool with one parameter and no complex schema, it's adequate but has gaps in behavioral context.

Parameters: 3/5

Schema description coverage is 100%, with the 'limit' parameter fully documented in the schema. The description adds no parameter-specific information beyond what the schema provides, such as how 'limit' affects the quality or type of opportunities returned. Baseline 3 is appropriate given the schema handles parameter documentation.

Purpose: 4/5

The description clearly states the tool identifies profitable gaps in the agent economy with specific details like niches, price points, competitor counts, and build ideas backed by revenue data. It distinguishes itself from potential siblings by focusing on 'opportunities' rather than analysis or overview, though it doesn't explicitly differentiate from tools like 'search_services' or 'suggest_pricing'.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like 'analyze_service' or 'market_overview'. It mentions a preview and paid unlock, which hints at usage context, but doesn't specify scenarios, prerequisites, or exclusions for tool selection.

market_overview (Market Overview), Grade B

Complete state of the agent economy — where $148M+ in payments have flowed, every category broken down by revenue, monetization gaps, emerging players, and what's next. Preview is free — $50 USDC via x402 unlocks the full report.

Parameters (JSON Schema): none

Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It reveals that the tool has a free preview and a paid full report, indicating potential access restrictions and transactional behavior. However, it lacks details on rate limits, authentication needs, response format, or whether it's read-only or mutative, leaving significant gaps in understanding its operational traits.

Conciseness: 4/5

The description is efficiently structured in two sentences: the first outlines the tool's content and scope, and the second explains access and pricing. It is front-loaded with key information and avoids unnecessary details, though the promotional tone ('$148M+ in payments') slightly detracts from pure conciseness.

Completeness: 3/5

Given the tool's complexity (providing an economic overview with breakdowns) and the absence of annotations and output schema, the description is moderately complete. It covers purpose and access model but lacks details on output format, error handling, or integration with sibling tools. For a tool with no structured data support, it should provide more behavioral and contextual guidance to be fully adequate.

Parameters: 4/5

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately avoids discussing parameters, focusing instead on the tool's purpose and usage context. This aligns with the baseline expectation for zero-parameter tools, where the description should not compensate for missing param info.

Purpose: 3/5

The description states the tool provides a 'complete state of the agent economy' with breakdowns by revenue, monetization gaps, emerging players, and future trends. This gives a general sense of purpose but is somewhat vague about the specific action (e.g., whether it generates, retrieves, or displays this overview) and doesn't clearly distinguish it from sibling tools like 'market_pulse' or 'find_opportunities'.

Usage Guidelines: 2/5

The description mentions a free preview and a paid unlock ($50 USDC via x402), which implies usage contexts related to cost and access. However, it provides no explicit guidance on when to use this tool versus alternatives like 'analyze_service' or 'search_services', nor does it specify prerequisites or exclusions beyond the payment requirement.

market_pulse (Market Pulse), Grade A

Real-time snapshot of the agent economy: top earners with revenue, hottest categories, total volume, and the insights you'd miss looking at raw data. Supports time periods: 1d, 7d, 14d, 30d, or all. Preview is free — $1 USDC via x402 unlocks the full picture.

Parameters (JSON Schema):
- period (optional, default: all): Time period for revenue data: 1d, 7d, 14d, 30d, or all
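Since 'period' is the tool's only argument and has a fixed set of allowed values, a client can validate it before spending a call. The sketch below is a hypothetical helper, assuming the allowed values listed in the schema and the documented default of "all".

```python
# Client-side check for market_pulse's 'period' argument.
# Allowed values and the "all" default come from the published schema.
ALLOWED_PERIODS = {"1d", "7d", "14d", "30d", "all"}

def build_arguments(period: str = "all") -> dict:
    """Return a validated arguments dict for a market_pulse call."""
    if period not in ALLOWED_PERIODS:
        raise ValueError(f"period must be one of {sorted(ALLOWED_PERIODS)}")
    return {"period": period}

print(build_arguments("7d"))  # {'period': '7d'}
```

Rejecting an invalid period locally avoids a round trip for a request the server would refuse anyway.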
Behavior: 4/5

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a read-only operation (implied by 'snapshot'), has a pricing model ('preview is free — $1 USDC via x402 unlocks the full picture'), and supports specific time periods. It doesn't mention rate limits, authentication requirements, or data freshness guarantees, but provides substantial behavioral context beyond basic functionality.

Conciseness: 4/5

The description is appropriately sized and front-loaded with the core purpose in the first clause. The second sentence adds important behavioral context (time periods), and the third provides critical pricing information. While efficient, the marketing language ('insights you'd miss') could be slightly tightened for pure utility.

Completeness: 4/5

For a single-parameter tool with no output schema, the description provides good completeness: it explains what information is returned (top earners, categories, volume, insights), mentions the pricing model, and specifies time period support. It doesn't detail the exact structure of returned data or pagination, but given the tool's relative simplicity and lack of output schema, this is reasonably complete.

Parameters: 3/5

Schema description coverage is 100% with the single parameter 'period' fully documented in the schema. The description adds minimal value beyond the schema by mentioning the same time periods in the text, but doesn't provide additional semantic context about how different periods affect the data or insights. Baseline 3 is appropriate when the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('snapshot', 'unlocks') and resources ('agent economy', 'top earners', 'hottest categories', 'total volume', 'insights'). It distinguishes from raw data analysis and provides concrete examples of what information is included, making it highly specific and differentiated from siblings.

Usage Guidelines: 3/5

The description implies usage context by mentioning 'real-time snapshot' and 'insights you'd miss looking at raw data', suggesting this is for high-level market analysis rather than detailed investigation. However, it doesn't explicitly state when to use this tool versus alternatives like 'market_overview' or 'find_opportunities', nor does it provide exclusion criteria or direct comparisons with sibling tools.

search_services (Search Services), Grade A

Search 100K+ services across the agent economy by category or keyword. Returns scores, metadata, and service IDs you can pass to analyze_service for a deep dive. Free — no payment required.

Parameters (JSON Schema):
- limit (optional): Max results to return
- keyword (optional): Search by keyword in name, description, or category
- category (optional): Filter by category (e.g., 'crypto', 'ai-ml', 'marketing', 'finance', 'developer-tools', 'education', 'communication', 'cloud')
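The description notes that returned service IDs can be passed to analyze_service for a deep dive. The following is an illustrative sketch of that two-step flow; the service IDs shown are placeholders, not real data, and the call dicts only model the tool name and arguments, not a full MCP request.

```python
# Hypothetical search -> analyze chain: search_services yields service IDs,
# each of which becomes the 'query' argument of an analyze_service call.
def search_then_analyze_calls(keyword: str, limit: int = 5) -> list[dict]:
    search_call = {
        "name": "search_services",
        "arguments": {"keyword": keyword, "limit": limit},
    }
    # Placeholder IDs standing in for the server's actual search results:
    service_ids = ["svc_123", "svc_456"]
    analyze_calls = [
        {"name": "analyze_service", "arguments": {"query": sid}}
        for sid in service_ids
    ]
    return [search_call, *analyze_calls]

for call in search_then_analyze_calls("wallet security"):
    print(call["name"])
```

Because search_services is free, an agent can enumerate candidates cheaply before deciding which ones justify the paid analyze_service step.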
Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about the scale ('100K+ services'), return format ('scores, metadata, and service IDs'), and cost ('Free — no payment required'), but doesn't mention rate limits, authentication requirements, pagination behavior, or what happens when no results are found.

Conciseness: 5/5

The description is perfectly concise with three sentences that each earn their place: establishes scope and method, describes returns and next steps, and clarifies cost. It's front-loaded with the core functionality and wastes no words.

Completeness: 3/5

For a search tool with no annotations and no output schema, the description provides adequate context about what the tool does and returns, but lacks details about behavioral constraints, error conditions, or result format specifics. The mention of 'analyze_service' as a follow-up helps, but more operational context would be beneficial given the absence of structured metadata.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description mentions searching 'by category or keyword' which aligns with the schema, but doesn't add meaningful semantic context beyond what's already in the parameter descriptions. The baseline of 3 is appropriate when the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the tool searches services across the agent economy by category or keyword, which is a specific verb+resource combination. It distinguishes from sibling 'analyze_service' by mentioning that search returns IDs for further analysis, but doesn't differentiate from other search-related siblings like 'find_opportunities' or 'market_overview'.

Usage Guidelines: 4/5

The description provides clear context about when to use this tool - for searching services by category or keyword - and explicitly mentions the alternative 'analyze_service' for deep dives. However, it doesn't specify when NOT to use this tool or how it differs from other search-related siblings like 'find_opportunities' or 'market_overview'.

suggest_pricing (Suggest Pricing), Grade A

Revenue-backed pricing for what you're building. Describe your service in plain language and get a recommended price point based on what comparable services actually charge and earn. Preview is free — $10 USDC via x402 unlocks the full recommendation with comparables.

Parameters (JSON Schema):
- query (required): What you're building — natural language (e.g., 'wallet security scanner', 'DeFi analytics dashboard', 'token risk assessment API') or category (e.g., 'crypto', 'ai-ml')
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it offers a free preview and a paid full recommendation ($10 USDC via x402), which clarifies cost and access requirements. However, it does not mention rate limits, authentication needs, or detailed output behavior.

Conciseness: 5/5

The description is appropriately sized and front-loaded, with two sentences that efficiently convey the tool's purpose, usage, and cost structure without any wasted words. Every sentence adds value, making it highly concise and well-structured.

Completeness: 4/5

Given the tool's moderate complexity (one parameter, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and behavioral aspects like cost, but lacks details on output format or error handling. Without annotations or output schema, it could benefit from more on what the recommendation includes.

Parameters: 3/5

The input schema has 100% description coverage, so the schema already documents the single parameter 'query'. The description adds some context by explaining what to provide ('Describe your service in plain language') and giving examples, but it does not add significant meaning beyond the schema's description. Baseline 3 is appropriate as the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('get a recommended price point') and resources ('what you're building'), distinguishing it from siblings by focusing on pricing recommendations rather than analysis, finding opportunities, or market overviews. It specifies the basis for recommendations ('based on what comparable services actually charge and earn').

Usage Guidelines: 4/5

The description provides clear context for when to use this tool ('Describe your service in plain language and get a recommended price point'), but it does not explicitly state when not to use it or name specific alternatives among the sibling tools. It implies usage for pricing advice rather than other market functions.

Discussions

No comments yet.


Resources