PageDrop
Server Details
Free instant HTML hosting API. Deploy HTML pages, upload files, or extract ZIP archives with automatic TTL expiry and delete tokens. No API keys required.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 8 of 8 tools scored. Lowest: 2.9/5.
Each tool has a clearly distinct purpose targeting specific operations in the site deployment lifecycle. Tools like deploy_html, batch_deploy, fork_site, update_site, delete_site, batch_delete, get_site_info, and get_site_analytics all address unique actions with no functional overlap, making tool selection unambiguous.
All tool names follow a consistent verb_noun pattern with snake_case formatting throughout. The naming convention is perfectly uniform, using clear action verbs (deploy, delete, fork, get, update) paired with specific nouns (html, site, analytics, info), creating a predictable and readable toolset.
With 8 tools, this server is well-scoped for its site deployment and management domain. The count is ideal, covering core operations without bloat, and each tool serves a distinct, necessary function in the workflow from creation to deletion and analytics.
The toolset provides complete CRUD and lifecycle coverage for site management, including creation (deploy_html, batch_deploy), reading (get_site_info, get_site_analytics), updating (update_site), deletion (delete_site, batch_delete), and forking (fork_site). There are no obvious gaps, enabling agents to handle all typical workflows without dead ends.
Available Tools
8 tools

batch_delete (Grade: A)
Delete multiple sites at once. Provide an array of site IDs and their delete tokens. Max 50 per request.
| Name | Required | Description | Default |
|---|---|---|---|
| sites | Yes | Array of sites to delete | |
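As a rough illustration, a client-side sketch of assembling a batch_delete payload and enforcing the documented cap might look like the following. The schema only documents a required `sites` array; the per-entry field names (`siteId`, `deleteToken`) are assumptions borrowed from the sibling delete_site tool and may differ in practice.

```python
# Hypothetical batch_delete payload builder. Per-entry field names are
# assumed from the delete_site schema, not confirmed for this tool.
MAX_BATCH_DELETE = 50  # "Max 50 per request"


def build_batch_delete_payload(entries):
    """entries: iterable of (site_id, delete_token) pairs."""
    entries = list(entries)
    if len(entries) > MAX_BATCH_DELETE:
        raise ValueError(f"batch_delete accepts at most {MAX_BATCH_DELETE} sites per request")
    return {"sites": [{"siteId": s, "deleteToken": t} for s, t in entries]}


payload = build_batch_delete_payload([("abc123", "tok-1"), ("def456", "tok-2")])
```

Validating the cap locally avoids a round trip that the server would reject anyway.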
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the 'Max 50 per request' limit and emphasizes the requirement for 'delete tokens', but omits critical safety information like permanence of deletion or partial failure behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The three-sentence structure is perfectly economical: action definition, input requirements, and operational constraint. Every sentence earns its place with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and lack of output schema, the description adequately covers the input requirements. However, for a destructive batch operation without safety annotations, it should explicitly warn about irreversibility or authentication prerequisites.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, the description adds value by specifying the 'Max 50 per request' constraint not present in the schema, and reinforces the required array structure of site IDs paired with delete tokens.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Delete), resource (sites), and batch scope (multiple at once), effectively distinguishing it from the singular sibling tool 'delete_site'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies batch usage through 'multiple' and 'array', it lacks explicit guidance on when to prefer this over 'delete_site' or warnings about the irreversible nature of batch operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
batch_deploy (Grade: A)
Deploy multiple HTML or Markdown pages in a single request. Returns an array of site IDs, URLs, and delete tokens. Max 20 per request. Supports html or markdown content per entry.
| Name | Required | Description | Default |
|---|---|---|---|
| sites | Yes | Array of sites to deploy (max 20) | |
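A minimal sketch of client-side validation for a batch_deploy call, under the assumption (borrowed from the deploy_html schema) that each entry carries either an `html` or a `markdown` field, never both:

```python
# Hypothetical batch_deploy payload builder. The per-entry html/markdown
# field names follow the deploy_html schema and are assumptions here.
MAX_BATCH_DEPLOY = 20  # "Max 20 per request"


def build_batch_deploy_payload(entries):
    """entries: list of dicts, each with exactly one of "html" or "markdown"."""
    if len(entries) > MAX_BATCH_DEPLOY:
        raise ValueError(f"batch_deploy accepts at most {MAX_BATCH_DEPLOY} sites per request")
    for entry in entries:
        if ("html" in entry) == ("markdown" in entry):
            raise ValueError("each entry needs exactly one of html or markdown")
    return {"sites": entries}


payload = build_batch_deploy_payload([{"html": "<h1>Hi</h1>"}, {"markdown": "# Hi"}])
```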
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses return values ('array of site IDs, URLs, and delete tokens') and limits. However, it omits mutation safety details, atomicity guarantees, or whether deployments are immediately public.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences total with zero waste. Front-loaded with the core action ('Deploy multiple...'), followed by return values, limits, and content constraints. Every clause provides distinct information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by documenting return structure (IDs, URLs, delete tokens). Covers primary constraints (batch size, content type). Could improve by mentioning TTL behavior or delete token importance, but adequate for the complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The description adds value by stating that each entry supplies either html or markdown content ('Supports html or markdown content per entry') and by highlighting the max 20 constraint, reinforcing critical parameter semantics beyond the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Deploy' with clear resource 'multiple HTML or Markdown pages' and scope 'in a single request'. This effectively distinguishes it from sibling tool deploy_html (single page) and batch_delete (removal).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides quantitative constraint 'Max 20 per request' and content constraints ('html or markdown content per entry'), but lacks explicit guidance on when to choose this over deploy_html or whether partial failures are possible in the batch.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_site (Grade: C)
Delete a deployed site using its delete token
| Name | Required | Description | Default |
|---|---|---|---|
| siteId | Yes | The site ID to delete | |
| deleteToken | Yes | The delete token returned when the site was created | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions the delete token constraint, it fails to disclose that this is a destructive/irreversible operation, lacks safety warnings, and does not describe success/failure behaviors or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief at seven words, with the primary verb front-loaded. While efficient, the extreme brevity is insufficient for a destructive operation lacking annotations, leaving out critical safety and behavioral context that would earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a destructive mutation tool with no annotations, no output schema, and specific token-based authentication requirements, the description is incomplete. It lacks warnings about permanent data loss and return value documentation, and does not distinguish the tool from the sibling bulk operation batch_delete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with both parameters fully documented in the schema. The description mentions 'delete token' which aligns with the schema, but adds no additional semantic context (such as token format, expiration, or where to obtain it) beyond what the structured fields already provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Delete'), resource ('deployed site'), and specific mechanism ('using its delete token'). However, it does not explicitly distinguish from the sibling tool 'batch_delete' or indicate this is for single-site vs. bulk deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus 'batch_delete' or prerequisites for deletion. It mentions the delete token requirement but does not clarify that this token originates from the initial deployment.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deploy_html (Grade: A)
Deploy HTML or Markdown content and get an instant public URL. Pages are permanent by default. Returns site ID, URL, and delete token. Use either html or markdown parameter (not both).
| Name | Required | Description | Default |
|---|---|---|---|
| ttl | No | Time-to-live in days (omit = never expires) | |
| html | No | HTML content to deploy (use this OR markdown, not both) | |
| slug | No | Custom vanity slug for the URL (e.g. "my-project" → /s/my-project) | |
| title | No | Page title for markdown rendering (auto-detected from first # heading if omitted) | |
| ogImage | No | URL to Open Graph image for social media previews (1200x630px recommended) | |
| ogTitle | No | Open Graph title for social media previews (max 200 chars) | |
| markdown | No | Markdown content to deploy — auto-rendered to a styled HTML page (use this OR html, not both) | |
| password | No | Optional password to protect the page. Visitors must enter this password to view the content. | |
| ogDescription | No | Open Graph description for social media previews (max 500 chars) | |
| passwordExpiry | No | Password cookie expiry in hours (1-720, default 24). Only applies when password is set. | 24 |
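The constraints in the table above (html/markdown mutual exclusivity, the 1-720 hour passwordExpiry range, passwordExpiry requiring a password) can be checked client-side before calling the tool. The helper below is an illustrative sketch, not part of the API:

```python
# Hypothetical argument builder for deploy_html, validating the documented
# constraints locally. Field names match the parameter table above.
def build_deploy_html_args(*, html=None, markdown=None, ttl=None, slug=None,
                           password=None, password_expiry=None):
    if (html is None) == (markdown is None):
        raise ValueError("use exactly one of html or markdown")
    args = {k: v for k, v in {
        "html": html, "markdown": markdown, "ttl": ttl,
        "slug": slug, "password": password,
    }.items() if v is not None}
    if password_expiry is not None:
        if password is None:
            raise ValueError("passwordExpiry only applies when password is set")
        if not 1 <= password_expiry <= 720:
            raise ValueError("passwordExpiry must be between 1 and 720 hours")
        args["passwordExpiry"] = password_expiry
    return args


args = build_deploy_html_args(markdown="# Hello", ttl=7,
                              password="s3cret", password_expiry=48)
```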
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses persistence ('permanent by default'), return values ('site ID, URL, and delete token'), and availability ('instant'). Missing auth/rate limits, but covers key lifecycle behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste. Purpose first, then persistence behavior, then return values, then parameter constraints. Every sentence earns its place with high information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for missing output schema by listing return values. Covers the 10-parameter schema's key business logic (mutual exclusivity, ttl permanence). Could mention password/passwordExpiry interaction or auth requirements, but solid given schema richness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). Description adds value by emphasizing the mutual exclusivity constraint between html/markdown parameters, which individual schema descriptions only hint at. Also front-loads the content type polymorphism.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Deploy), resource (HTML or Markdown content), and outcome (instant public URL). Implicit coverage distinguishes it from sibling batch_deploy (singular vs batch) and update_site (create vs modify).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the mutual exclusivity constraint ('Use either html or markdown parameter, not both'), which is critical usage guidance. Lacks explicit comparison to when batch_deploy should be used instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fork_site (Grade: A)
Fork/clone a public site — creates an independent copy with new ownership. The original site is unchanged. Cannot fork password-protected sites. Returns new site ID, URL, and delete token.
| Name | Required | Description | Default |
|---|---|---|---|
| ttl | No | Time-to-live in days for the fork (omit = never expires) | |
| slug | No | Custom vanity slug for the forked site | |
| siteId | Yes | The site ID to fork | |
| ogImage | No | Override Open Graph image URL (inherits from source if omitted) | |
| ogTitle | No | Override Open Graph title (inherits from source if omitted) | |
| ogDescription | No | Override Open Graph description (inherits from source if omitted) | |
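The inherit-if-omitted semantics of the Open Graph overrides can be sketched as a simple merge. The field names match the table; the merge logic itself is an assumption about server behavior, shown only to illustrate the documented inheritance:

```python
# Illustrative sketch of fork_site's "inherits from source if omitted"
# behavior for the Open Graph fields. Not an actual API call.
def effective_og_fields(source_site, overrides):
    """overrides: dict possibly containing ogImage/ogTitle/ogDescription."""
    merged = {}
    for key in ("ogImage", "ogTitle", "ogDescription"):
        # An explicit override wins; otherwise the source site's value carries over.
        merged[key] = overrides.get(key, source_site.get(key))
    return merged


source = {"ogTitle": "Original", "ogImage": "https://example.com/og.png"}
fork_og = effective_og_fields(source, {"ogTitle": "My Fork"})
```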
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure despite no annotations: explicitly states original site is unchanged (source safety), new copy has independent ownership, and documents return values (site ID, URL, delete token) compensating for missing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: primary action with ownership implication, source safety guarantee, and limitations/returns. Front-loaded and efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 6-parameter operation with no annotations or output schema, description successfully compensates by documenting return structure, behavioral side effects (ownership transfer), and operational constraints (password protection), providing sufficient context for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema provides 100% parameter description coverage, establishing baseline 3. Description does not add significant semantic context beyond schema (e.g., explaining TTL expiration behavior or slug constraints), but meets adequacy threshold.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Fork/clone a public site' with specific verb and resource, distinguishes from siblings like update_site (modifies existing) and deploy_html (creates new from scratch) by clarifying it creates an independent copy with new ownership.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clearly states limitation 'Cannot fork password-protected sites' defining when not to use. Implies use case for public site duplication vs other operations, though could more explicitly reference sibling alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_site_analytics (Grade: A)
Get view analytics for a deployed site including daily views, referrer breakdown, and recent view log. Requires the delete token.
| Name | Required | Description | Default |
|---|---|---|---|
| siteId | Yes | The site ID to get analytics for | |
| deleteToken | Yes | The delete token returned when the site was created | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the unusual requirement that analytics retrieval requires a delete token, and previews the return data types (daily views, referrers, logs). However, it omits read-only status, error behaviors, rate limits, or pagination details for the view log.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first establishes the data retrieved, the second states the credential requirement. Every word earns its place; the structure is appropriately front-loaded with the value proposition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with no output schema, the description adequately compensates by describing the expected return data (views, referrers, logs). It covers the credential requirement. However, it lacks error handling guidance or rate limit disclosure that would be expected for a production analytics endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the deleteToken requirement but does not add syntax details, format constraints, or usage context beyond what the schema property descriptions already provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get'), identifies the resource ('view analytics for a deployed site'), and details the scope ('daily views, referrer breakdown, and recent view log'). This clearly distinguishes it from sibling get_site_info, which likely returns static metadata rather than usage analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states the prerequisite ('Requires the delete token'), which gates usage, but does not explicitly compare against siblings or state when to use this versus get_site_info. The delete token requirement is noted, but no alternative approaches are mentioned if the token is unavailable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_site_info (Grade: B)
Get information about a deployed site (files, size, expiry)
| Name | Required | Description | Default |
|---|---|---|---|
| siteId | Yes | The site ID to look up | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully indicates what data categories are retrieved (files, size, expiry), but fails to disclose safety characteristics (read-only nature), error behaviors (e.g., missing site handling), or authorization requirements that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence where every element earns its place. The parenthetical list (files, size, expiry) efficiently specifies return value categories without verbosity, and there is no redundant or wasteful text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter input and lack of output schema, the description appropriately compensates by detailing what information categories are returned. While it could explicitly address error cases or caching behavior, it provides sufficient context for a straightforward metadata retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('The site ID to look up'), the schema adequately documents the parameter. The description adds minimal semantic detail beyond implying the site must be 'deployed', meeting the baseline expectation for high-coverage schemas but not providing additional formatting or example guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('deployed site') and clarifies the scope by listing what information is returned ('files, size, expiry'). However, it does not explicitly differentiate from the sibling tool 'get_site_analytics', leaving potential ambiguity about when to use each.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_site_analytics' or prerequisites for the siteId. It lacks explicit 'when-to-use' or 'when-not-to-use' instructions, leaving the agent to infer usage context solely from the name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_site (Grade: A)
Update an existing site with new HTML or Markdown content without changing the URL. Requires the delete token from creation.
| Name | Required | Description | Default |
|---|---|---|---|
| html | No | New HTML content (use this OR markdown, not both) | |
| title | No | Page title for markdown rendering (auto-detected from first # heading if omitted) | |
| siteId | Yes | The site ID or slug to update | |
| ogImage | No | URL to Open Graph image for social media previews (1200x630px recommended) | |
| ogTitle | No | Open Graph title for social media previews (max 200 chars) | |
| markdown | No | New Markdown content — auto-rendered to styled HTML (use this OR html, not both) | |
| password | No | Set or update password protection. Empty string removes password. | |
| deleteToken | Yes | The delete token returned when the site was created | |
| ogDescription | No | Open Graph description for social media previews (max 500 chars) | |
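One subtle case worth encoding is the password semantics: per the table above, an empty string removes password protection, while omitting the field leaves it unchanged. A hedged sketch (field names from the table; `REMOVE_PASSWORD` is a local convention, not API syntax):

```python
# Illustrative update_site argument builder. Distinguishes "remove the
# password" (empty string) from "leave it unchanged" (field omitted).
REMOVE_PASSWORD = ""  # per the schema: "Empty string removes password"


def build_update_args(site_id, delete_token, *, html=None, markdown=None,
                      password=None):
    if (html is None) == (markdown is None):
        raise ValueError("use exactly one of html or markdown")
    args = {"siteId": site_id, "deleteToken": delete_token}
    if html is not None:
        args["html"] = html
    else:
        args["markdown"] = markdown
    if password is not None:  # "" is a valid value meaning "remove password"
        args["password"] = password
    return args


args = build_update_args("abc123", "tok-1", markdown="# v2",
                         password=REMOVE_PASSWORD)
```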
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. It discloses URL preservation and the unusual delete-token auth requirement. However, it omits whether content is replaced or merged, what happens to omitted optional fields, and return value structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence captures purpose and constraints; second states critical prerequisite. Appropriately sized for complexity given rich schema coverage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 100% schema coverage covering 9 parameters, description appropriately focuses on high-level behavioral constraints. Captures the non-obvious delete-token requirement and URL persistence. Minor gap: doesn't clarify if update replaces or merges content.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description mentions 'HTML or Markdown content' which maps to parameters, but doesn't add semantic details beyond schema (e.g., no syntax guidance) or clarify the mutual exclusivity which is already in schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Update'), resource ('existing site'), and scope ('HTML or Markdown content'). The phrase 'without changing the URL' distinguishes it from deploy_html (which likely creates new sites), though it doesn't explicitly contrast with fork_site or batch operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States critical prerequisite ('Requires the delete token from creation') and constraint ('without changing the URL'), but lacks explicit when-to-use guidance versus deploy_html or batch_deploy for content updates.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!