SteadyFetch
Server Details
Reliable web fetching for AI agents with retry, circuit breaker, caching, and anti-bot bypass
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools
cache_stats
Get cache statistics — size and item count.
Useful for monitoring cache utilization and deciding when to clear.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
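As a hedged illustration of calling this tool, the sketch below uses the official MCP TypeScript SDK over Streamable HTTP; the endpoint URL and client name are placeholders, and the exact shape of the returned statistics is server-defined:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the Streamable HTTP URL for this connector.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp")
);

const client = new Client({ name: "steadyfetch-demo", version: "1.0.0" });
await client.connect(transport);

// cache_stats takes no arguments; the result carries size and item count.
const stats = await client.callTool({ name: "cache_stats", arguments: {} });
console.log(stats.content);

await client.close();
```

The later sketches in this section assume a `client` connected the same way.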
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It specifies what metrics are returned (size, item count) but omits safety characteristics (read-only status), performance implications, and whether the operation itself affects cache state.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first sentence front-loads the core functionality, while the second provides usage context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description appropriately avoids detailing return values. It adequately covers the tool's purpose and basic usage context for a zero-parameter statistics tool, though it could benefit from explicit safety confirmation given the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, establishing a baseline score of 4 per evaluation rules. No parameter documentation is required or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the action ('Get'), resource ('cache statistics'), and specific scope ('size and item count'). It clearly distinguishes from sibling 'clear_cache' by indicating this is a retrieval operation rather than a destructive one.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence provides implicit usage context ('Useful for monitoring... and deciding when to clear'), hinting at a workflow involving the sibling clear_cache tool. However, it neither names the alternative tool explicitly nor offers when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_domain
Check the health status of a domain.
Returns the circuit breaker state: 'closed' (healthy), 'open' (failing),
or 'half_open' (testing recovery). Use this before batch operations to
avoid wasting time on domains that are down.
Args:
domain: The domain to check (e.g., 'example.com')
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
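Building on the "use this before batch operations" guidance, a hypothetical pre-flight helper might drop domains whose circuit breaker is hard 'open' before a batch fetch. This is a sketch: the helper name is invented, and matching the state word in the serialized result is an assumption, since the listing does not document the exact payload shape:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Keep only domains worth fetching: 'closed' (healthy) and
// 'half_open' (testing recovery) pass; hard 'open' (failing) is skipped.
export async function filterHealthyDomains(
  client: Client,
  domains: string[]
): Promise<string[]> {
  const healthy: string[] = [];
  for (const domain of domains) {
    const res = await client.callTool({
      name: "check_domain",
      arguments: { domain },
    });
    // Assumption: the breaker state appears as a word in the textual result.
    const text = JSON.stringify(res.content);
    // 'half_open' contains 'open', so rule it out before testing for 'open'.
    const isHardOpen = text.includes("open") && !text.includes("half_open");
    if (!isHardOpen) healthy.push(domain);
  }
  return healthy;
}
```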
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses the three possible return states ('closed', 'open', 'half_open') and their semantic meanings (healthy, failing, testing recovery). It could improve by explicitly stating that this is a read-only, safe operation, but the 'Check' and 'Returns' language strongly implies non-destructive behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with zero waste: sentence 1 states purpose, sentence 2 details return values, sentence 3 gives usage context, and the Args block documents the parameter. Every sentence earns its place and information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a simple single-parameter tool. Despite lack of annotations, the description covers the circuit breaker behavioral model. Since an output schema exists, the brief summary of return states is sufficient complementary information rather than redundant.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (parameter has no description field). The description compensates effectively by providing the Args section with an example value ('example.com') and clarifying it is 'The domain to check'. Slight deduction for not specifying format constraints (e.g., whether protocol is allowed).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Check' and resource 'health status of a domain'. Clearly distinguishes from siblings (cache_stats, clear_cache, fetch_markdown, fetch_url) by focusing on domain health/circuit breaker monitoring rather than content retrieval or cache management.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance: 'Use this before batch operations to avoid wasting time on domains that are down'. Lacks explicit 'when not to use' or named alternatives, but the batch operation context strongly implies intended use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
clear_cache
Clear the entire fetch cache.
Use when you need fresh data and don't want to rely on cached results.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
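A minimal sketch of the corresponding call, wrapped as a helper that takes an already-connected client; the function name is illustrative:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Drops every cached entry server-wide, not just a single URL, so the
// next fetch_url / fetch_markdown calls repopulate the cache from the network.
export async function forceFreshData(client: Client): Promise<void> {
  await client.callTool({ name: "clear_cache", arguments: {} });
}
```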
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It successfully discloses the scope (the 'entire' cache) but fails to mention that the operation is destructive and irreversible, to warn about the performance implications for subsequent calls, or to note any authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of exactly two high-value sentences with zero redundancy. The first front-loads the core action, while the second immediately provides usage context. No words are wasted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the output schema covers return values and the parameter schema is trivial, the description misses contextual relationships to sibling fetch tools (fetch_url, fetch_markdown) that populate this cache. For a destructive global operation, it should also warn about impact on concurrent operations or immediate performance degradation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description appropriately does not introduce phantom parameters, maintaining consistency with the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific action ('Clear') and resource ('entire fetch cache'), clearly distinguishing this as a cache management operation distinct from sibling tools like cache_stats (monitoring) and fetch_url (data retrieval). The scope 'entire' clarifies this is a global operation, not selective.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence provides explicit positive guidance ('Use when you need fresh data...'), clearly indicating the trigger condition for invocation. However, it lacks negative constraints (when not to use) or explicit comparison to alternatives like simply bypassing cache on individual fetch calls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch_markdown
Fetch a URL and return clean markdown text optimized for LLM consumption.
Same reliability as fetch_url but returns only the markdown content,
stripping HTML, scripts, and noise. Best for when you need the page
content for analysis, summarization, or data extraction.
Args:
url: The URL to fetch
use_cache: Whether to use cached results (default: true)
wait_for: CSS selector to wait for before capturing
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
| wait_for | No | | |
| use_cache | No | | true |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
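A sketch of a typical call, assuming an already-connected client; the helper name, URL, and CSS selector are placeholders:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Fetch a page as LLM-ready markdown, bypassing the cache for fresh content.
// The result carries markdown stripped of HTML, scripts, and noise.
export async function fetchArticleMarkdown(client: Client) {
  return client.callTool({
    name: "fetch_markdown",
    arguments: {
      url: "https://example.com/articles/42", // placeholder URL
      wait_for: ".article-body",              // capture after dynamic content renders
      use_cache: false,                       // skip cached results
    },
  });
}
```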
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses content transformation ('stripping HTML, scripts, and noise') and optimization goal ('for LLM consumption'). However, it lacks operational details like error handling, authentication requirements, or rate limits expected for a network fetch tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with core purpose, followed by sibling differentiation, use cases, and parameter documentation. Every sentence earns its place; the Args section is necessary given the empty schema descriptions. No redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (3 simple parameters) and existence of an output schema, the description is nearly complete. It covers parameters (essential due to 0% schema coverage), behavioral traits, and sibling context. Minor deduction for missing error handling documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage (only titles). The Args section compensates by documenting all 3 parameters: 'url' purpose, 'use_cache' boolean behavior with default, and 'wait_for' as a CSS selector. This fully covers the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb+resource ('Fetch a URL') and explicit output format ('return clean markdown text'). It effectively distinguishes from sibling 'fetch_url' by stating it 'returns only the markdown content, stripping HTML, scripts, and noise' and noting 'Same reliability as fetch_url'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear use cases ('Best for when you need the page content for analysis, summarization, or data extraction') and implicitly guides selection versus 'fetch_url' by contrasting output formats. Lacks explicit 'when not to use' guidance (e.g., when HTML structure is needed) to earn a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch_url
Fetch a URL with full reliability — retry, circuit breaker, cache, and anti-bot bypass.
Returns both raw HTML and clean markdown. Automatically retries on failure
with exponential backoff, falls back to plain HTTP if browser fetch fails,
and circuit-breaks domains that are consistently down.
Args:
url: The URL to fetch
use_cache: Whether to use cached results (default: true, TTL 1 hour)
js_render: Whether to render JavaScript (default: true, disable for speed)
wait_for: CSS selector to wait for before capturing (e.g., '.results-loaded')
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
| wait_for | No | | |
| js_render | No | | true |
| use_cache | No | | true |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
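A sketch exercising all four documented parameters, assuming an already-connected client; the helper name, URL, and selector are placeholders. Retries, HTTP fallback, and circuit breaking happen server-side, so the client call stays simple:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Fetch a dynamic page; the result carries both raw HTML and clean markdown.
export async function fetchSearchResults(client: Client) {
  return client.callTool({
    name: "fetch_url",
    arguments: {
      url: "https://example.com/search?q=mcp", // placeholder URL
      js_render: true,             // default: true; set false to trade rendering for speed
      wait_for: ".results-loaded", // capture only after this selector appears
      use_cache: true,             // default: true; cached entries expire after a 1-hour TTL
    },
  });
}
```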
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and excels: it details retry logic (exponential backoff), fallback mechanisms (plain HTTP if browser fails), circuit-breaking behavior, anti-bot bypass capabilities, and cache TTL (1 hour). No contradictions exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with high information density. The first sentence establishes reliability features, the second declares output formats, the third explains resilience behavior, and the Args section provides structured parameter documentation. No sentences are wasted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 4 parameters and complex behavior (retries, caching, JS rendering), the description is comprehensive. Since an output schema exists, the brief mention of return formats ('raw HTML and clean markdown') is sufficient without detailing the full structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage (properties only have titles), the description fully compensates by documenting all four parameters with semantics: url purpose, cache TTL and default, JS rendering trade-offs, and wait_for syntax with a concrete CSS selector example. It adds significant value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Fetch') and resource ('URL'), immediately distinguishing this from siblings like cache_stats or check_domain. It further differentiates from fetch_markdown by explicitly stating it returns 'both raw HTML and clean markdown,' clarifying its unique value proposition.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear operational guidance, such as disabling js_render 'for speed' and explaining the wait_for parameter for dynamic content. However, it lacks explicit comparison to the sibling tool fetch_markdown (e.g., stating when to prefer one over the other) and does not mention prerequisites or when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
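Before relying on automatic detection, you can sanity-check that the file is publicly served and parses as JSON; a minimal sketch, assuming Node 18+ (for global fetch) and a placeholder hostname:

```typescript
// Quick self-check that /.well-known/glama.json is reachable and valid.
// Replace the hostname with your server's domain.
const res = await fetch("https://your-domain.example/.well-known/glama.json");
if (!res.ok) throw new Error(`HTTP ${res.status}: file not reachable`);

const manifest = await res.json();
console.log(manifest.maintainers); // should list your Glama account email
```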
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.