webEmbedding
Server Details
Source-first URL clone, capture, rebuild, and fidelity verification tools.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: jongko54/webEmbedding
- GitHub Stars: 2
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 6 of 6 tools scored. Lowest: 2.7/5.
Each tool has a distinct purpose: classifying modes, detecting runtime, discovering candidates, generating snippets, inspecting URLs, and planning paths. No significant overlap.
All tools follow a consistent verb_noun naming pattern with lowercase and underscores, e.g., classify_clone_mode, inspect_url.
Six tools is a well-scoped surface for the domain of web embedding and reproduction planning, covering the essential operations without bloat or obvious gaps.
Core workflow is covered: classification, detection, discovery, snippet generation, inspection, and planning. Minor gaps include lack of direct execution or frameability verification tooling.
Available Tools
6 tools

classify_clone_mode (Classify Clone Mode) · Grade C · Read-only · Idempotent
Decide whether a reference should be embedded, sourced, locally captured, bounded-rebuilt, or blocked before reproduction.
| Name | Required | Description | Default |
|---|---|---|---|
| candidates | No | ||
| license_text | No | ||
| site_profile | No | ||
| source_signals | No | ||
| exact_requested | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only (readOnlyHint=true), idempotent (idempotentHint=true), and non-destructive (destructiveHint=false) behavior. The description adds that the tool makes a classification decision, but does not explain how parameters influence the decision or any side effects. It provides modest added value over annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence, front-loading the core action and outcomes. However, its brevity sacrifices necessary detail; a slightly longer description that explains parameter roles would be more helpful without being overly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema, the description fails to cover the input parameters, which are all optional but undocumented. The tool has 5 parameters with nested objects, yet the description gives no clue about their semantics or how to use them. This is a significant gap for effective tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must carry the parameter documentation itself, yet it mentions only the reference being classified and the five output modes. No guidance is given for 'candidates', 'license_text', 'site_profile', 'source_signals', or 'exact_requested'. An agent cannot determine how to set these parameters from the description alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: deciding a clone mode among five specified options. It uses a specific verb 'Decide' and lists the possible outcomes, making the purpose evident. However, it does not differentiate from sibling tools explicitly, but the title and context imply a classification role.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus siblings. It does not mention prerequisites, typical workflow placement, or conditions for each mode. An agent would need additional context to decide when to invoke this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
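Since the schema documents none of the five parameters, an agent can still form a syntactically valid MCP tools/call request and rely on the server to tolerate partial input. The sketch below builds such a request; the argument values and shapes are guesses for illustration, not documented behavior:

```python
import json

# Hypothetical MCP "tools/call" request for classify_clone_mode.
# All argument shapes below are assumptions: the published schema
# leaves every parameter undescribed, so real values may differ.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "classify_clone_mode",
        "arguments": {
            "exact_requested": True,                      # assumed boolean
            "license_text": "MIT License ...",            # assumed raw text
            "source_signals": {"has_public_repo": True},  # assumed object shape
        },
    },
}
print(json.dumps(request, indent=2))
```

All parameters are optional, so omitting 'candidates' and 'site_profile' entirely should still produce a valid call; whether the classification is useful without them is unknowable from the description.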
detect_runtime_capabilities (Detect Hosted Runtime Capabilities) · Grade A · Read-only · Idempotent
Report the hosted Apps SDK intake runtime capabilities and explain when the local stdio MCP is required.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, so safety is clear. Description adds specific behavioral context: it reports capabilities and explains a condition (when local stdio MCP is required). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single concise sentence (17 words) that front-loads the action and resource. Every word contributes meaning; no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no parameters, clear annotations, and an output schema existing, the description fully informs the agent of what the tool does: reports capabilities and explains a condition. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has no parameters and schema description coverage is 100%. Baseline for 0 params is 4; description does not need to add parameter info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verbs 'report' and 'explain' with clear resource 'hosted Apps SDK intake runtime capabilities' and purpose. It distinctly differentiates from siblings like classify_clone_mode or inspect_url, which have different functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies when to use: to understand runtime capabilities and when local stdio MCP is needed. It provides clear context but does not explicitly state when not to use or compare alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_embed_candidates (Discover Embed Candidates) · Grade A · Read-only · Idempotent
Extract likely embed, preview, viewer, remix, and source URLs from a public or user-authorized page.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | ||
| timeout_seconds | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds the context of 'public or user-authorized page', but does not elaborate on response size, limiting factors, or other behavioral traits beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that conveys the tool's function without extraneous words. It is front-loaded but could be slightly more structured for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, the description covers the basic purpose. However, it lacks parameter documentation and does not fully utilize the context from annotations or output schema to provide a complete picture for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should compensate by explaining parameters. However, it does not mention the 'url' or 'timeout_seconds' parameters, leaving their purpose and usage unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'extract' and the resource 'likely embed, preview, viewer, remix, and source URLs from a public or user-authorized page', providing specificity and distinguishing it from sibling tools like inspect_url and generate_embed_snippet.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for fetching embed-related URLs from a page, but it does not provide explicit guidance on when to use it versus alternatives, nor does it mention conditions for use or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
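The tool's internals are not published, but one plausible reading of "discover embed candidates" is scanning a page's markup for oEmbed discovery links, Open Graph video tags, and existing iframes. A minimal sketch of that idea, using only the standard library (the class name and candidate labels are invented for illustration):

```python
from html.parser import HTMLParser

class EmbedCandidateFinder(HTMLParser):
    """Collect link/meta/iframe URLs that commonly signal embeddable media."""

    def __init__(self):
        super().__init__()
        self.candidates = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("type", "").startswith("application/json+oembed"):
            # oEmbed discovery link per the oEmbed spec
            self.candidates.append(("oembed", a.get("href")))
        elif tag == "meta" and a.get("property") in ("og:video", "og:video:url"):
            # Open Graph video URL
            self.candidates.append(("og:video", a.get("content")))
        elif tag == "iframe" and a.get("src"):
            # An iframe already present on the page is itself a candidate
            self.candidates.append(("iframe", a["src"]))

html = """<head>
<link rel="alternate" type="application/json+oembed" href="https://example.com/oembed?url=x">
</head><body><iframe src="https://example.com/embed/123"></iframe></body>"""
finder = EmbedCandidateFinder()
finder.feed(html)
print(finder.candidates)
```

A real implementation would also need to fetch the page (the 'url' parameter) under the 'timeout_seconds' budget and resolve relative URLs against the page's base.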
generate_embed_snippet (Generate Embed Snippet) · Grade A · Read-only · Idempotent
Generate an iframe snippet for a known frameable and authorized URL. Does not verify frameability by itself.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | ||
| title | No | ||
| framework | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only, idempotent, non-destructive behavior. The description adds a critical behavioral detail: the tool does not verify frameability, which is a key limitation beyond what annotations capture. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no extraneous words. Every sentence adds value, making it concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters (with low schema coverage), an output schema exists, but the description omits explanations for 'title' and 'framework'. It covers the core behavior but lacks completeness for parameter semantics. Adequate but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% for the 3 parameters. The description implies the 'url' parameter but does not explain 'title' or 'framework' (enum). It fails to add meaning beyond the schema for the optional parameters, leaving the agent uninformed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('generate an iframe snippet'), the input ('known frameable and authorized URL'), and distinguishes it from siblings like discover_embed_candidates which finds frameable URLs. The explicit note about not verifying frameability adds clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It specifies when to use ('for a known frameable and authorized URL') and when not to ('does not verify frameability'). While it doesn't name alternatives, the sibling tool list provides context, making the usage context clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
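The tool's actual output format is undocumented, but a snippet generator of this kind can be sketched in a few lines. This illustration (the function name and attribute choices are assumptions, not the server's real behavior) shows the kind of iframe markup such a tool might return, including attribute escaping for the 'url' and 'title' parameters:

```python
from html import escape

def embed_snippet(url: str, title: str = "Embedded page") -> str:
    """Build a minimal iframe snippet. Illustrative only: mirrors what a
    tool like generate_embed_snippet might return, with escaped attributes
    and conservative sandbox/referrer defaults."""
    return (
        f'<iframe src="{escape(url, quote=True)}" '
        f'title="{escape(title, quote=True)}" '
        'loading="lazy" sandbox="allow-scripts allow-same-origin" '
        'referrerpolicy="no-referrer"></iframe>'
    )

print(embed_snippet("https://example.com/embed/123", title="Demo"))
```

Note that, like the tool itself, this sketch does not verify frameability; the target can still refuse to render via X-Frame-Options or a CSP frame-ancestors directive.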
inspect_url (Inspect URL Reuse Route) · Grade A · Read-only · Idempotent
Fetch a public or user-authorized URL and inspect title, metadata, frame policy, and likely source/embed candidates. Does not capture screenshots or persist artifacts.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | ||
| timeout_seconds | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, and destructiveHint. The description adds specific behaviors: fetches the URL, inspects multiple attributes, and does not persist artifacts. This adds context about the actual operation and output scope, going beyond annotations. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences with no unnecessary words. It front-loads the action and lists key outputs immediately, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich annotations and the presence of an output schema (which likely covers return values), the description is sufficiently complete. It explains what the tool inspects, preconditions, and limits. It could add more about error handling or rate limits, but for a read-only tool with good annotations, this is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only implicitly explains the 'url' parameter by saying 'fetch a public or user-authorized URL', but the 'timeout_seconds' parameter is not mentioned at all. The description provides minimal addition for the main parameter but lacks details on constraints or format, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'fetch' and 'inspect' applied to a URL resource, and lists specific outputs (title, metadata, frame policy, candidates). It distinguishes from sibling tools by explicitly stating what it does not do (screenshots, persist artifacts), which helps separate it from related tools like generate_embed_snippet or plan_reproduction_path.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies that the URL must be 'public or user-authorized', giving a precondition. It also clearly states what the tool does not do ('does not capture screenshots or persist artifacts'), indirectly guiding when not to use it. However, it does not explicitly mention alternatives among sibling tools, so a 4 is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
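The "frame policy" this tool inspects is, in standard terms, derived from the response's X-Frame-Options and Content-Security-Policy headers. A minimal sketch of such a check, assuming nothing about the tool's real implementation (the function and return labels are invented; a complete check would also evaluate the frame-ancestors source list against the embedding origin):

```python
def frame_policy(headers: dict) -> str:
    """Classify whether a response's headers permit framing.
    CSP frame-ancestors takes precedence over X-Frame-Options per spec."""
    h = {k.lower(): v for k, v in headers.items()}
    csp = h.get("content-security-policy", "")
    for directive in csp.split(";"):
        d = directive.strip().lower()
        if d.startswith("frame-ancestors"):
            # 'none' forbids all framing; anything else limits allowed origins
            return "blocked" if "'none'" in d else "restricted"
    xfo = h.get("x-frame-options", "").upper()
    if xfo in ("DENY", "SAMEORIGIN"):
        return "blocked" if xfo == "DENY" else "restricted"
    return "allowed"

print(frame_policy({"X-Frame-Options": "DENY"}))
print(frame_policy({"Content-Security-Policy": "frame-ancestors 'none'"}))
print(frame_policy({}))
```

This is also the check the quality summary flags as a gap: no tool in the set verifies frameability end to end, so a caller would approximate it from inspect_url's reported frame policy.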
plan_reproduction_path (Plan Reproduction Path) · Grade B · Read-only · Idempotent
Create a source-first plan that separates exact embed/source reuse from local capture and bounded rebuild work.
| Name | Required | Description | Default |
|---|---|---|---|
| candidates | No | ||
| license_text | No | ||
| site_profile | No | ||
| capture_bundle | No | ||
| source_signals | No | ||
| exact_requested | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds value by explaining that the output involves separating reuse from capture/rebuild, which gives insight into the plan's structure. However, it does not disclose additional behavioral traits like potential computational cost or required permissions beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that conveys the core purpose without any wasted words. It is front-loaded with the main action and directly states the key distinction of the plan.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, nested objects, output schema) and the complete lack of parameter descriptions, the description is far from complete. It does not place the tool in the broader workflow or explain what inputs it expects, making it insufficient for an agent to use correctly without external knowledge.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 6 parameters (candidates, license_text, site_profile, capture_bundle, source_signals, exact_requested) with 0% description coverage. The tool description does not explain any of these parameters, leaving an agent without guidance on what to provide. This severely impairs correct invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: creating a source-first plan that separates exact reuse from local capture and bounded rebuild work. It provides a specific verb ('Create') and resource ('reproduction path'), and distinguishes itself from sibling tools like classify_clone_mode or discover_embed_candidates which focus on different aspects of the reproduction workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus its siblings. It lacks explicit context about prerequisites, typical workflow placement (e.g., after discover_embed_candidates), or situations where alternative tools should be chosen. No when-to-use or when-not-to-use information is included.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.