Server Details

A simple tool that performs a fetch request to a webpage.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
smithery-ai/mcp-servers
GitHub Stars
95

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

3 tools
extract_elements (Quality Grade: B)

Extract specific elements from a web page using CSS selectors.

Parameters (JSON Schema)

url (required): The URL to fetch
limit (optional): Maximum number of elements to return
selector (required): CSS selector to find elements (e.g., 'img', '.class', '#id', 'link[rel*="icon"]')
attribute (optional): Optional attribute to extract from elements (e.g., 'href', 'src', 'alt')
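The server's actual implementation is not shown on this page, but the parameter set above is enough to sketch what an extract_elements-style tool might do. The following is a hypothetical, simplified version using only the Python standard library: it handles bare tag-name selectors only, whereas full '.class'/'#id' selector support would require a CSS selector engine (e.g., BeautifulSoup's select()).

```python
from html.parser import HTMLParser


class _TagCollector(HTMLParser):
    """Collects the attributes of every start tag matching a tag name.

    A simplified stand-in for full CSS selector matching.
    """

    def __init__(self, tag):
        super().__init__()
        self.tag = tag
        self.matches = []  # one attribute dict per matched element

    def handle_starttag(self, tag, attrs):
        if tag == self.tag:
            self.matches.append(dict(attrs))


def extract_elements(html, selector, attribute=None, limit=None):
    """Extract elements from already-fetched HTML.

    Only bare tag-name selectors ('img', 'a', ...) are supported in
    this sketch; '.class' and '#id' selectors are not.
    """
    parser = _TagCollector(selector)
    parser.feed(html)
    results = parser.matches
    if attribute is not None:
        # Return just the requested attribute, skipping elements
        # that lack it.
        results = [m[attribute] for m in results if attribute in m]
    if limit is not None:
        results = results[:limit]
    return results


page = '<html><body><img src="a.png"><img src="b.png" alt="logo"></body></html>'
print(extract_elements(page, "img", attribute="src", limit=1))  # ['a.png']
```

Note that this sketch parses HTML it is given rather than fetching the URL itself; a real tool would fetch the page first and could report errors for invalid URLs or selectors, which is exactly the behavior the review below flags as undocumented.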
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose behavioral traits such as whether the operation is read-only, what the return format looks like (text, HTML, JSON?), error handling for invalid URLs or selectors, or rate limiting concerns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that is front-loaded and efficient. No redundant or filler text—every word serves to define the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple 4-parameter tool with 100% schema coverage, but lacks description of output format (especially since no output schema exists) and error handling behavior for web scraping edge cases (invalid URLs, network timeouts).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all 4 parameters (url, selector, limit, attribute) fully documented in the schema. The description adds no additional parameter semantics, but baseline 3 is appropriate when the schema carries the full load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the action (Extract), target (elements from a web page), and mechanism (CSS selectors). The mention of CSS selectors implicitly distinguishes it from sibling tools like fetch_url or get_page_metadata, though it doesn't explicitly reference them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit guidance on when to use this tool versus fetch_url or get_page_metadata. While 'CSS selectors' implies targeted extraction use cases, there are no when/when-not instructions or alternative recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fetch_url (Quality Grade: C)

Fetch a URL and return basic information about the page.

Parameters (JSON Schema)

url (required): The URL to fetch
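The review below notes that "basic information" is left undefined. For illustration only, here is a hypothetical sketch of what such a tool might return (HTTP status, content type, and page title); the parsing step is separated from the network request so it can be exercised offline, and none of this reflects the server's actual implementation.

```python
import urllib.request
from html.parser import HTMLParser


class _TitleParser(HTMLParser):
    """Captures the text inside the page's <title> element."""

    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data


def basic_info(html, status=200, content_type="text/html"):
    """Parse already-fetched HTML into a small info dict."""
    parser = _TitleParser()
    parser.feed(html)
    return {"status": status, "content_type": content_type,
            "title": parser.title.strip()}


def fetch_url(url, timeout=10):
    """Fetch a URL and return basic information about the page."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        body = resp.read().decode(charset, errors="replace")
        return basic_info(body, resp.status, resp.headers.get_content_type())


print(basic_info("<html><head><title> Example </title></head></html>"))
# {'status': 200, 'content_type': 'text/html', 'title': 'Example'}
```

A description along these lines ("returns the HTTP status, content type, and title") would resolve the ambiguity the Behavior and Purpose reviews call out.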
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It fails to disclose whether this follows redirects, handles JavaScript rendering, timeout behavior, rate limits, or what specific data constitutes 'basic information' (headers vs body vs status code).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundant words. The primary action ('Fetch a URL') is front-loaded, and the sentence conveys both the action and the return type, vague as the latter is.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool, the description is minimally viable but has clear gaps. Without an output schema, it should specify what 'basic information' includes to distinguish from siblings. It leaves the agent guessing whether the output is structured metadata or raw content.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('The URL to fetch'), the schema adequately documents the single parameter. The description adds no additional semantic context (e.g., protocol requirements, format constraints), warranting the baseline score for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the verb ('Fetch') and resource ('URL'), but 'return basic information about the page' is vague. It fails to differentiate from sibling 'get_page_metadata'—it's unclear whether this returns raw HTML, rendered content, headers, or a summary versus the sibling's likely focus on meta tags.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus 'extract_elements' or 'get_page_metadata'. No mention of prerequisites (valid URL format), error conditions, or when fetching might be preferred over extraction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_page_metadata (Quality Grade: A)

Extract comprehensive metadata from a web page including title, description, Open Graph tags, Twitter cards, and other meta information.

Parameters (JSON Schema)

url (required): The URL to analyze
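The description enumerates the metadata categories (title, description, Open Graph tags, Twitter cards) but, as the Completeness review below notes, not the return format. A hypothetical sketch of one plausible output shape, grouping meta tags by prefix, parsing HTML offline with the Python standard library (again, not the server's actual implementation):

```python
from html.parser import HTMLParser


class _MetaParser(HTMLParser):
    """Collects <title> text and all <meta name=.../property=...> tags."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self._in_title = False
        self.meta = {}  # meta key -> content value

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            # Open Graph uses property="og:..."; most others use name=...
            key = a.get("property") or a.get("name")
            if key and "content" in a:
                self.meta[key] = a["content"]

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data


def get_page_metadata(html):
    """Group extracted meta tags into basic / Open Graph / Twitter buckets."""
    p = _MetaParser()
    p.feed(html)
    return {
        "title": p.title.strip(),
        "description": p.meta.get("description"),
        "open_graph": {k: v for k, v in p.meta.items() if k.startswith("og:")},
        "twitter": {k: v for k, v in p.meta.items() if k.startswith("twitter:")},
    }


html_doc = (
    '<head><title>Demo</title>'
    '<meta name="description" content="A demo page">'
    '<meta property="og:title" content="Demo OG">'
    '<meta name="twitter:card" content="summary"></head>'
)
print(get_page_metadata(html_doc))
```

Documenting a structure like this in the tool description (and ideally as an output schema) would address the missing-output-format gap without lengthening the description much.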
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, placing full burden on the description. While it lists what metadata is extracted, it omits critical behavioral traits: whether the tool follows redirects, handles JavaScript-rendered pages, rate limits, error conditions (404s, timeouts), or the output structure/format. The term 'comprehensive' hints at coverage but lacks concrete behavioral guarantees.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, well-structured sentence that front-loads the core action ('Extract comprehensive metadata') and efficiently enumerates specific metadata categories without redundancy. Every word serves a purpose; zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool, the description adequately explains the function but fails to compensate for the missing output schema. It does not describe the return format (structured object vs flat dictionary), field naming conventions, or behavior when metadata is missing. Sufficient for basic selection but incomplete for invocation confidence.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('The URL to analyze'), the baseline is 3. The description adds minimal context by specifying 'web page' (implying HTML documents rather than arbitrary URLs), but provides no additional details about URL format requirements, validation rules, or whether relative URLs are accepted.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Extract' with clear resource 'metadata from a web page' and enumerates specific metadata types (title, description, Open Graph tags, Twitter cards). This specificity implicitly distinguishes it from sibling tools fetch_url (likely raw content) and extract_elements (likely DOM elements) by focusing exclusively on meta tag extraction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the specificity of extracted metadata, suggesting use when Open Graph, Twitter cards, or SEO meta tags are needed. However, it lacks explicit guidance on when to prefer this over fetch_url or extract_elements, and states no prerequisites (e.g., public URLs vs authenticated pages).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
