Glama

Server Details

Source-first URL clone, capture, rebuild, and fidelity verification tools.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: jongko54/webEmbedding
GitHub Stars: 2

Tool Descriptions: B

Average 3.7/5 across 6 of 6 tools scored. Lowest: 2.7/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct purpose: classifying modes, detecting runtime, discovering candidates, generating snippets, inspecting URLs, and planning paths. No significant overlap.

Naming Consistency: 5/5

All tools follow a consistent verb_noun naming pattern in lowercase with underscores, e.g., classify_clone_mode, inspect_url.

Tool Count: 5/5

6 tools is well-scoped for the domain of web embedding and reproduction planning, covering essential operations without excess or deficiency.

Completeness: 4/5

Core workflow is covered: classification, detection, discovery, snippet generation, inspection, and planning. Minor gaps include lack of direct execution or frameability verification tooling.

Available Tools (6 tools)
classify_clone_mode (Classify Clone Mode): C
Read-only · Idempotent · Inspect

Decide whether a reference should be embedded, sourced, locally captured, bounded-rebuilt, or blocked before reproduction.

Parameters (JSON Schema)

Name             Required  Description  Default
candidates       No
license_text     No
site_profile     No
source_signals   No
exact_requested  No

Output Schema

Parameters (JSON Schema)

Name  Required  Description

No output parameters
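Because none of the five parameters carries a schema description, any invocation is guesswork. A hypothetical MCP tools/call payload is sketched below; the JSON-RPC envelope and the name/arguments shape follow the MCP specification, but every argument value is an assumption, not documented behavior:

```python
import json

# Hypothetical tools/call for classify_clone_mode.
# All argument values below are guesses; the schema publishes
# no descriptions for these parameters.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "classify_clone_mode",
        "arguments": {
            "candidates": [{"url": "https://example.com/embed/abc"}],
            "license_text": "CC BY 4.0",
            "site_profile": {"domain": "example.com"},
            "source_signals": {"repo_link": None},
            "exact_requested": False,
        },
    },
}
print(json.dumps(payload, indent=2))
```

This illustrates exactly the gap the scores below call out: an agent can build a syntactically valid call, but has no way to know whether these nested shapes are what the server expects.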

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only (readOnlyHint=true), idempotent (idempotentHint=true), and non-destructive (destructiveHint=false) behavior. The description adds that the tool makes a classification decision, but does not explain how parameters influence the decision or any side effects. It provides modest added value over annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence, front-loading the core action and outcomes. However, its brevity sacrifices necessary detail; a slightly longer description that explains parameter roles would be more helpful without being overly verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having an output schema, the description fails to cover the input parameters, which are all optional but undocumented. The tool has 5 parameters with nested objects, yet the description gives no clue about their semantics or how to use them. This is a significant gap for effective tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must explain the parameters, but it only mentions 'reference' in the output modes. No guidance is given for 'candidates', 'license_text', 'site_profile', 'source_signals', or 'exact_requested'. An agent cannot determine how to set these parameters from the description alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: deciding a clone mode among five specified options. It uses the specific verb 'Decide' and lists the possible outcomes, making the purpose evident. It does not explicitly differentiate itself from sibling tools, though the title and context imply a classification role.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus siblings. It does not mention prerequisites, typical workflow placement, or conditions for each mode. An agent would need additional context to decide when to invoke this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

detect_runtime_capabilities (Detect Hosted Runtime Capabilities): A
Read-only · Idempotent · Inspect

Report the hosted Apps SDK intake runtime capabilities and explain when the local stdio MCP is required.

Parameters (JSON Schema)

Name  Required  Description  Default

No parameters

Output Schema

Parameters (JSON Schema)

Name  Required  Description

No output parameters
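For a zero-parameter tool the call shape is trivial. A sketch of the JSON-RPC payload (the envelope follows the MCP specification; the id value is arbitrary):

```python
import json

# tools/call for a tool with no input parameters:
# "arguments" is simply an empty object.
payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "detect_runtime_capabilities", "arguments": {}},
}
print(json.dumps(payload))
```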

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, so safety is clear. Description adds specific behavioral context: it reports capabilities and explains a condition (when local stdio MCP is required). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single concise sentence (17 words) that front-loads the action and resource. Every word contributes meaning; no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no parameters, clear annotations, and an output schema existing, the description fully informs the agent of what the tool does: reports capabilities and explains a condition. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, and schema description coverage is 100%. The baseline score for zero-parameter tools is 4; the description does not need to add parameter information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verbs 'report' and 'explain' with clear resource 'hosted Apps SDK intake runtime capabilities' and purpose. It distinctly differentiates from siblings like classify_clone_mode or inspect_url, which have different functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies when to use: to understand runtime capabilities and when local stdio MCP is needed. It provides clear context but does not explicitly state when not to use or compare alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_embed_candidates (Discover Embed Candidates): A
Read-only · Idempotent · Inspect

Extract likely embed, preview, viewer, remix, and source URLs from a public or user-authorized page.

Parameters (JSON Schema)

Name             Required  Description  Default
url              Yes
timeout_seconds  No

Output Schema

Parameters (JSON Schema)

Name  Required  Description

No output parameters
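The discovery the tool performs can be approximated locally. A minimal sketch, assuming candidates come from oEmbed discovery links, iframe sources, and og:video meta tags; the server's actual heuristics are not documented:

```python
from html.parser import HTMLParser

class EmbedCandidateParser(HTMLParser):
    """Collect likely embed/source URLs from a page:
    oEmbed <link> tags, <iframe src>, and og:video <meta> tags."""
    def __init__(self):
        super().__init__()
        self.candidates = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and "oembed" in (a.get("type") or ""):
            self.candidates.append(a.get("href"))
        elif tag == "iframe" and a.get("src"):
            self.candidates.append(a["src"])
        elif tag == "meta" and a.get("property") == "og:video" and a.get("content"):
            self.candidates.append(a["content"])

html = (
    '<link rel="alternate" type="application/json+oembed" '
    'href="https://example.com/oembed?url=x">'
    '<iframe src="https://example.com/embed/42"></iframe>'
)
p = EmbedCandidateParser()
p.feed(html)
print(p.candidates)
```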

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds the context of 'public or user-authorized page', but does not elaborate on response size, limiting factors, or other behavioral traits beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that conveys the tool's function without extraneous words. It is front-loaded but could be slightly more structured for readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, the description covers the basic purpose. However, it lacks parameter documentation and does not fully utilize the context from annotations or output schema to provide a complete picture for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description should compensate by explaining parameters. However, it does not mention the 'url' or 'timeout_seconds' parameters, leaving their purpose and usage unclear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'extract' and the resource 'likely embed, preview, viewer, remix, and source URLs from a public or user-authorized page', providing specificity and distinguishing it from sibling tools like inspect_url and generate_embed_snippet.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for fetching embed-related URLs from a page, but it does not provide explicit guidance on when to use it versus alternatives, nor does it mention conditions for use or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_embed_snippet (Generate Embed Snippet): A
Read-only · Idempotent · Inspect

Generate an iframe snippet for a known frameable and authorized URL. Does not verify frameability by itself.

Parameters (JSON Schema)

Name       Required  Description  Default
url        Yes
title      No
framework  No

Output Schema

Parameters (JSON Schema)

Name  Required  Description

No output parameters
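What the tool likely emits can be sketched as a plain template. This is a generic iframe snippet for illustration, not the server's actual output format; like the tool itself, it performs no frameability check:

```python
from html import escape

def embed_snippet(url: str, title: str = "Embedded content") -> str:
    """Build a minimal, sandboxed iframe snippet for a URL that is
    already known to be frameable and authorized (no verification
    here, matching the tool's stated limitation)."""
    return (
        f'<iframe src="{escape(url, quote=True)}" '
        f'title="{escape(title, quote=True)}" '
        'loading="lazy" sandbox="allow-scripts allow-same-origin">'
        "</iframe>"
    )

print(embed_snippet("https://example.com/embed/42", "Demo"))
```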

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only, idempotent, non-destructive behavior. The description adds a critical behavioral detail: the tool does not verify frameability, which is a key limitation beyond what annotations capture. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no extraneous words. Every sentence adds value, making it concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 3 parameters with low schema coverage; an output schema exists, but the description omits explanations for 'title' and 'framework'. It covers the core behavior yet lacks parameter semantics. Adequate, but with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% for the 3 parameters. The description implies the 'url' parameter but does not explain 'title' or 'framework' (enum). It fails to add meaning beyond the schema for the optional parameters, leaving the agent uninformed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('generate an iframe snippet'), the input ('known frameable and authorized URL'), and distinguishes it from siblings like discover_embed_candidates which finds frameable URLs. The explicit note about not verifying frameability adds clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It specifies when to use ('for a known frameable and authorized URL') and when not to ('does not verify frameability'). While it doesn't name alternatives, the sibling tool list provides context, making the usage context clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

inspect_url (Inspect URL Reuse Route): A
Read-only · Idempotent · Inspect

Fetch a public or user-authorized URL and inspect title, metadata, frame policy, and likely source/embed candidates. Does not capture screenshots or persist artifacts.

Parameters (JSON Schema)

Name             Required  Description  Default
url              Yes
timeout_seconds  No

Output Schema

Parameters (JSON Schema)

Name  Required  Description

No output parameters
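The frame-policy part of the inspection can be illustrated with a response-header check. A sketch assuming the server looks at X-Frame-Options and the CSP frame-ancestors directive; the actual heuristics are not documented:

```python
def frame_policy(headers: dict) -> str:
    """Classify frameability from response headers.
    Returns 'blocked', 'restricted', or 'frameable'."""
    h = {k.lower(): v for k, v in headers.items()}
    xfo = h.get("x-frame-options", "").upper()
    if xfo == "DENY":
        return "blocked"
    if xfo == "SAMEORIGIN":
        return "restricted"
    csp = h.get("content-security-policy", "")
    for directive in csp.split(";"):
        d = directive.strip().lower()
        if d.startswith("frame-ancestors"):
            if "'none'" in d:
                return "blocked"
            if d == "frame-ancestors 'self'":
                return "restricted"
    return "frameable"

print(frame_policy({"X-Frame-Options": "DENY"}))                       # -> blocked
print(frame_policy({"Content-Security-Policy": "frame-ancestors *"}))  # -> frameable
```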

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, idempotentHint, and destructiveHint. The description adds specific behaviors: fetches the URL, inspects multiple attributes, and does not persist artifacts. This adds context about the actual operation and output scope, going beyond annotations. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences with no unnecessary words. It front-loads the action and lists key outputs immediately, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich annotations and the presence of an output schema (which likely covers return values), the description is sufficiently complete. It explains what the tool inspects, preconditions, and limits. It could add more about error handling or rate limits, but for a read-only tool with good annotations, this is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It only implicitly explains the 'url' parameter by saying 'fetch a public or user-authorized URL', but the 'timeout_seconds' parameter is not mentioned at all. The description provides minimal addition for the main parameter but lacks details on constraints or format, resulting in a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'fetch' and 'inspect' applied to a URL resource, and lists specific outputs (title, metadata, frame policy, candidates). It distinguishes from sibling tools by explicitly stating what it does not do (screenshots, persist artifacts), which helps separate it from related tools like generate_embed_snippet or plan_reproduction_path.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies that the URL must be 'public or user-authorized', giving a precondition. It also clearly states what the tool does not do ('does not capture screenshots or persist artifacts'), indirectly guiding when not to use it. However, it does not explicitly mention alternatives among sibling tools, so a 4 is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

plan_reproduction_path (Plan Reproduction Path): B
Read-only · Idempotent · Inspect

Create a source-first plan that separates exact embed/source reuse from local capture and bounded rebuild work.

Parameters (JSON Schema)

Name             Required  Description  Default
candidates       No
license_text     No
site_profile     No
capture_bundle   No
source_signals   No
exact_requested  No

Output Schema

Parameters (JSON Schema)

Name  Required  Description

No output parameters
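The separation the description promises can be sketched as a pure function. This is hypothetical logic for illustration only; the server's actual criteria for exact reuse versus capture/rebuild are undocumented:

```python
def split_plan(candidates: list, frameable_urls: set) -> dict:
    """Separate exact reuse (already-frameable embed/source URLs)
    from work that needs local capture or a bounded rebuild."""
    reuse, rebuild = [], []
    for c in candidates:
        (reuse if c in frameable_urls else rebuild).append(c)
    return {"exact_reuse": reuse, "capture_or_rebuild": rebuild}

plan = split_plan(
    ["https://example.com/embed/42", "https://example.com/page"],
    {"https://example.com/embed/42"},
)
print(plan)
```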

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds value by explaining that the output involves separating reuse from capture/rebuild, which gives insight into the plan's structure. However, it does not disclose additional behavioral traits like potential computational cost or required permissions beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that conveys the core purpose without any wasted words. It is front-loaded with the main action and directly states the key distinction of the plan.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, nested objects, output schema) and the complete lack of parameter descriptions, the description is far from complete. It does not place the tool in the broader workflow or explain what inputs it expects, making it insufficient for an agent to use correctly without external knowledge.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 6 parameters (candidates, license_text, site_profile, capture_bundle, source_signals, exact_requested) with 0% description coverage. The tool description does not explain any of these parameters, leaving an agent without guidance on what to provide. This severely impairs correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: creating a source-first plan that separates exact reuse from local capture and bounded rebuild work. It provides a specific verb ('Create') and resource ('reproduction path'), and distinguishes itself from sibling tools like classify_clone_mode or discover_embed_candidates which focus on different aspects of the reproduction workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus its siblings. It lacks explicit context about prerequisites, typical workflow placement (e.g., after discover_embed_candidates), or situations where alternative tools should be chosen. No when-to-use or when-not-to-use information is included.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
