
LinkPulse

Server Details

URL reality check for agents: status, content hash, classification, wayback fallback.

Status: Healthy
Transport: Streamable HTTP
Repository: walkojas-boop/linkpulse
GitHub Stars: 0
Server Listing
LinkPulse

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 4 of 4 tools scored. Lowest: 3.2/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: linkpulse_check performs a full fetch and analysis, linkpulse_classify works on existing data without fetching, linkpulse_diff detects content changes, and linkpulse_resolve handles redirects only. The descriptions explicitly differentiate their use cases, eliminating any ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent pattern of 'linkpulse_' prefix followed by a descriptive verb (check, classify, diff, resolve). This uniform naming scheme makes the tools easily identifiable and predictable within the set.

Tool Count: 5/5

With 4 tools, the set is well-scoped for the domain of URL analysis and monitoring. Each tool serves a specific, necessary function (fetching, classifying, diffing, resolving), and there are no extraneous or redundant tools, making the count appropriate for the server's purpose.

Completeness: 5/5

The toolset provides complete coverage for URL analysis workflows: it supports fetching and analyzing URLs (linkpulse_check), classifying existing content (linkpulse_classify), detecting changes (linkpulse_diff), and handling redirects (linkpulse_resolve). There are no obvious gaps, as all core operations for monitoring and analyzing URLs are included.

Available Tools

4 tools
linkpulse_check: A

Fetch a URL and return its current state: HTTP status, final URL after redirects, SHA-256 content hash, classification, readability score, title/meta-description, and wayback archive fallback if dead. Cached 10 minutes.

Parameters (JSON Schema)

Name | Required | Description | Default
url | Yes | |
max_bytes | No | |
timeout_ms | No | |
force_fresh | No | |
include_body_sample | No | If true, return first 1KB of body text. |
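The listing does not show an example invocation, so the sketch below builds a hypothetical MCP `tools/call` request for linkpulse_check. The parameter names come from the table above; the values chosen for `max_bytes`, `timeout_ms`, and `force_fresh` are illustrative assumptions, since no defaults are documented.

```python
import json

# Hypothetical "tools/call" request for linkpulse_check. Parameter names
# are from the listing; the numeric values are assumptions, not documented
# defaults.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "linkpulse_check",
        "arguments": {
            "url": "https://example.com/article",
            "max_bytes": 262_144,         # assumed cap on bytes fetched
            "timeout_ms": 8_000,          # assumed network timeout
            "force_fresh": False,         # presumably bypasses the 10-minute cache
            "include_body_sample": True,  # return first 1KB of body text
        },
    },
}

print(json.dumps(request, indent=2))
```

The payload shape follows the generic MCP tool-call convention; the server's actual transport framing (Streamable HTTP) wraps this JSON-RPC message.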
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and adds valuable behavioral context: it discloses caching behavior ('cached 10 minutes'), fallback mechanisms ('wayback archive fallback if dead'), and output details like HTTP status and content hash. However, it does not mention rate limits, authentication needs, or error handling, leaving some gaps for a tool with network operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the main purpose, followed by specific output details and behavioral notes. Every sentence earns its place by adding value: the first states the action and key outputs, the second adds caching and fallback info. It is appropriately sized with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does well by covering the tool's purpose, key outputs, and some behavior like caching. However, for a tool with 5 parameters and network operations, it could benefit from more details on error cases or response structure, though it is largely complete for basic usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low at 20%, with only one parameter ('include_body_sample') having a description. The description compensates by explaining the core parameter 'url' and implying usage of others through context (e.g., 'max_bytes' relates to content hash, 'timeout_ms' to fetching). It adds meaning beyond the schema, though not all parameters are explicitly detailed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('fetch') and resource ('URL'), specifying what the tool does: retrieve a URL's current state with multiple metrics. It distinguishes from siblings like linkpulse_classify (classification only), linkpulse_diff (comparison), and linkpulse_resolve (likely URL resolution), making the purpose specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for checking URL status and content, but does not explicitly state when to use this tool versus alternatives like linkpulse_classify or linkpulse_diff. It mentions a fallback to wayback archive for dead links, which provides some context, but lacks clear guidance on exclusions or specific scenarios favoring this tool over siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

linkpulse_classify: B

Classify content you already have (status code + content-type + body sample) without an outbound fetch. Useful if your agent already fetched the URL via another tool.

Parameters (JSON Schema)

Name | Required | Description | Default
status | Yes | |
body_sample | Yes | |
content_type | Yes | |
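Since this tool classifies content you already hold, its arguments can be assembled from a response another tool fetched earlier. The sketch below assumes a generic fetch-result shape; the 1 KB sample size mirrors linkpulse_check's body-sample note and is an assumption, not a documented limit for this tool.

```python
# Reuse a response already fetched elsewhere to build linkpulse_classify
# arguments, avoiding a second outbound fetch. The `fetched` dict is a
# hypothetical stand-in for whatever your fetch tool returned.
fetched = {
    "status": 200,
    "headers": {"Content-Type": "text/html; charset=utf-8"},
    "body": "<html><head><title>Example</title></head><body>Hello, world.</body></html>",
}

arguments = {
    "status": fetched["status"],
    "content_type": fetched["headers"]["Content-Type"],
    "body_sample": fetched["body"][:1024],  # assumed 1 KB sample convention
}

print(arguments)
```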
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool classifies content without fetching, which implies a read-only, non-destructive operation, but doesn't specify what 'classify' entails (e.g., categories, confidence scores), potential rate limits, or authentication needs. This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, consisting of two sentences that directly address purpose and usage without any wasted words. Every sentence earns its place by providing essential information efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, no annotations, no output schema), the description is incomplete. It lacks details on what 'classify' means in terms of output, parameter constraints, and behavioral traits like error handling or performance. Without an output schema, the description should ideally hint at return values, but it doesn't.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It lists the three required parameters (status code, content-type, body sample) but doesn't explain their semantics, formats, or constraints (e.g., valid status codes, content-type formats, body sample size limits). This adds minimal value beyond the schema's structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Classify content you already have' with specific inputs (status code + content-type + body sample). It distinguishes itself from potential fetch operations by emphasizing 'without an outbound fetch,' though it doesn't explicitly differentiate from sibling tools like linkpulse_check or linkpulse_diff.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'Useful if your agent already fetched the URL via another tool.' This implies an alternative workflow where fetching is done separately, but it doesn't explicitly state when not to use it or name specific sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

linkpulse_diff: A

Re-check a URL and tell you whether its content hash has changed since a previous hash you supply. Returns boolean + new hash.

Parameters (JSON Schema)

Name | Required | Description | Default
url | Yes | |
previous_hash | Yes | |
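linkpulse_check documents a SHA-256 content hash, so a hash saved from an earlier check can seed `previous_hash` here. The sketch below computes one locally; hex encoding is an assumption, as the listing does not say how the hash is serialized.

```python
import hashlib

# Compute a SHA-256 content hash locally (the algorithm linkpulse_check
# reports) to use as linkpulse_diff's previous_hash. Hex encoding is an
# assumption about the server's serialization.
earlier_body = b"<html><body>Version 1 of the page</body></html>"
previous_hash = hashlib.sha256(earlier_body).hexdigest()

arguments = {
    "url": "https://example.com/article",
    "previous_hash": previous_hash,
}

print(arguments["previous_hash"])
```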
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the core behavior (hash comparison) and return format (boolean + new hash), but doesn't mention error handling, rate limits, authentication needs, or what happens if the URL is inaccessible. It adds basic context but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and return value with zero wasted words. Every element (action, parameters, output) earns its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and 2 parameters with 0% schema coverage, the description provides the minimum viable context: purpose, parameters, and return format. However, it lacks details on error cases, performance characteristics, or integration with sibling tools, leaving gaps for a mutation-like comparison tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate. It explains the meaning of both parameters ('url' to re-check, 'previous_hash' to compare against) beyond their schema types, though it doesn't specify hash algorithm or format details. For 2 parameters, this provides good semantic clarification.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Re-check a URL', 'tell you whether its content hash has changed') and resource ('URL'). It distinguishes from siblings by focusing on hash comparison rather than checking, classifying, or resolving URLs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you have a previous hash to compare against, but doesn't explicitly state when to use this tool versus alternatives like 'linkpulse_check' or provide exclusions. The context is clear but lacks explicit guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

linkpulse_resolve: A

Resolve redirect chain for a URL without downloading the full body. Returns final URL + redirect count.

Parameters (JSON Schema)

Name | Required | Description | Default
url | Yes | |
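To make the tool's contract concrete, here is a pure-logic sketch of what linkpulse_resolve is described as doing: follow a redirect chain without downloading bodies, then report the final URL and hop count. The `locations` map stands in for HEAD requests reading each hop's Location header, and `max_hops` is an assumed safety limit, not a documented one.

```python
# Pure-logic model of redirect resolution: each key in `locations` maps a
# URL to its redirect target (as a Location header would). Stops when a
# URL has no redirect or the hop limit is reached.
def resolve(url, locations, max_hops=10):
    hops = 0
    while url in locations and hops < max_hops:
        url = locations[url]
        hops += 1
    return {"final_url": url, "redirect_count": hops}

chain = {
    "http://example.com": "https://example.com",
    "https://example.com": "https://example.com/home",
}
print(resolve("http://example.com", chain))
# → {'final_url': 'https://example.com/home', 'redirect_count': 2}
```

The return shape (final URL plus redirect count) matches the description above; field names are illustrative.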
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the tool resolves redirects, doesn't download full bodies, and returns final URL + redirect count. However, it lacks details on error handling, rate limits, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first clause, followed by method and return details. Every sentence adds value with zero waste, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (redirect resolution), no annotations, and no output schema, the description is reasonably complete. It covers purpose, method, and return values, though it could benefit from more behavioral context like error cases or performance characteristics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, but the description compensates by explaining the parameter's purpose ('for a URL') and the tool's function. Since there's only one parameter (url), the description provides adequate context without needing detailed param semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Resolve redirect chain'), the resource ('for a URL'), and the method ('without downloading the full body'). It distinguishes from siblings by focusing on redirect resolution rather than checking, classifying, or diffing URLs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'without downloading the full body', suggesting efficiency for redirect tracing. However, it doesn't explicitly state when to use this tool versus alternatives like linkpulse_check or provide exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
