artidrop
Server Details
The publishing layer for AI agents. Turn HTML and Markdown into shareable URLs instantly via MCP.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 3 of 3 tools scored. Lowest: 2.9/5.
Each tool targets a distinct phase of the artifact lifecycle: 'publish' creates content, 'get' retrieves current state/content, and 'versions' queries history. No functional overlap exists between the three operations.
While all names are lowercase, the pattern is mixed: 'get' and 'publish' use imperative verbs (with implied objects), while 'versions' uses a plural noun. This breaks the verb_noun convention and makes 'versions' appear as a resource rather than an action like the others.
Three tools is slightly minimal for an artifact management service but acceptable for a focused 'publish and retrieve' scope. It sits just above the 'thin' threshold, providing core functionality without bloat, though additional lifecycle tools would be expected for production use.
Significant gaps prevent full artifact lifecycle management: there is no 'list' or 'search' to discover artifacts (forcing agents to rely on external ID storage), no 'delete' or 'unpublish' for cleanup, and no way to retrieve specific historical versions despite being able to list them. Agents will hit dead ends when trying to manage or clean up artifacts.
Available Tools
4 tools

get (A)
Get artifact metadata. Set include_content to true to also return the full HTML content.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Artifact ID or short ID | |
| include_content | No | Include the full HTML content in the response | false |
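As an illustration, a call to this tool over the Streamable HTTP transport would be wrapped in an MCP `tools/call` JSON-RPC request; the sketch below shows only the request shape, and the artifact ID is a placeholder:

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call request.
# "abc123" is a hypothetical artifact ID, not a real one.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get",
        "arguments": {"id": "abc123", "include_content": True},
    },
}
print(json.dumps(request, indent=2))
```

Omitting `include_content` (or setting it to `false`) returns metadata only, which keeps responses small when the HTML body is not needed.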
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It explains that HTML content is optionally returned, but omits safety profile, error handling (e.g., missing ID behavior), idempotency, or rate limiting details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Both sentences earn their place: the first defines scope (metadata retrieval), the second explains the critical behavioral toggle (include_content). Zero redundancy in 16 words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter retrieval tool without output schema, the description adequately covers the return values (metadata/HTML). Minor gap in not describing error states or null returns, but sufficient for the complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (baseline 3), the description adds valuable usage context by explicitly instructing to 'Set include_content to true' for full HTML retrieval, reinforcing the parameter's practical application.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'artifact metadata' using the specific verb 'Get' and distinguishes itself from siblings 'publish' (mutation) and 'versions' (listing) through its singular retrieval focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit guidance by explaining the include_content flag's effect, but lacks explicit guidance on when to use this tool versus siblings (e.g., when to prefer this over 'versions' for artifact inspection).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
publish (A)
Publish HTML or Markdown content as a shareable URL. For Markdown, optionally include images as base64-encoded data.
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | Artifact title | Untitled |
| format | Yes | Content format | |
| images | No | Images referenced in the markdown content. Only used when format is 'markdown'. | |
| content | Yes | The HTML or Markdown content to publish | |
| visibility | No | Visibility | unlisted |
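A `tools/call` request for this tool might look like the following sketch; the content is a placeholder, and the `"markdown"` format value is inferred from the tool description rather than a documented enum:

```python
import json

# Hypothetical publish request; title, content, and visibility values
# are placeholders chosen to match the documented defaults and flags.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "publish",
        "arguments": {
            "title": "Release notes",
            "format": "markdown",
            "content": "# v1.0\n\nInitial release.",
            "visibility": "unlisted",
        },
    },
}
print(json.dumps(request, indent=2))
```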
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the output ('shareable URL') but fails to disclose critical mutation traits: persistence duration, whether publishing is permanent, if updates are allowed, or visibility implications of 'public' vs 'unlisted'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with zero waste. Every word earns its place: 'Publish' (action), 'HTML or Markdown' (input types), 'shareable URL' (output value). No filler or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter mutation tool with no output schema, the description adequately covers the core output ('shareable URL') but omits behavioral context about the visibility parameter's security implications and what the publishing operation returns. Sufficient but minimal for the complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the relationship between content and format parameters by mentioning 'HTML or Markdown' but adds no syntax details, examples, or constraints beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Publish') with clear resource ('HTML or Markdown content') and output ('shareable URL'). It effectively distinguishes from siblings 'get' (retrieve) and 'versions' (history) by emphasizing the creation of a shareable artifact.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states what the tool does but provides no explicit guidance on when to use it versus alternatives, prerequisites for publishing, or when to choose 'public' vs 'unlisted' visibility. Usage is implied by the verb but not contextualized.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
publish_site (A)
Publish a multi-file HTML site from a base64-encoded ZIP file. The ZIP must contain an index.html at its root. For sites larger than ~10MB, prefer the REST API /v1/artifacts/upload endpoint.
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | Site title | Untitled |
| visibility | No | Visibility | unlisted |
| zip_base64 | Yes | Base64-encoded ZIP file containing the site files | |
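The `zip_base64` payload can be built with standard-library tooling; a minimal sketch, with placeholder file contents, that also checks the archive against the ~10MB guidance before calling the tool:

```python
import base64
import io
import zipfile

# Build an in-memory ZIP with index.html at its root, as the tool
# requires, then base64-encode it for the zip_base64 parameter.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("index.html", "<h1>Hello</h1>")
    zf.writestr("style.css", "h1 { color: navy; }")

raw = buf.getvalue()
# Rough pre-flight check: beyond ~10MB, the description says to
# prefer the REST API /v1/artifacts/upload endpoint instead.
assert len(raw) < 10 * 1024 * 1024

zip_base64 = base64.b64encode(raw).decode("ascii")
```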
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full disclosure burden. It effectively communicates operational constraints: the ~10MB size limit and the mandatory index.html requirement ('ZIP must contain an index.html at its root'). It lacks disclosure of side effects, idempotency, or return value format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: action definition, file structure constraint, and size limit/alternative. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (file upload), complete parameter coverage, and lack of annotations/output schema, the description adequately covers input constraints and usage thresholds. A minor gap is the absence of return value description, though the publishing action is self-evident.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage (baseline 3), the description adds significant semantic value by specifying the internal structure requirement of the ZIP parameter (index.html at root) and the size constraint (~10MB), which are not captured in the schema's basic type descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb (Publish), resource (multi-file HTML site), and input format (base64-encoded ZIP file), clearly distinguishing it from the generic sibling 'publish' tool by specifying the HTML site use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when NOT to use the tool ('For sites larger than ~10MB') and names the specific alternative ('prefer the REST API /v1/artifacts/upload endpoint'), providing clear guidance on tool selection boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
versions (C)
List version history of an artifact
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Artifact ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fails to disclose the tool's read-only nature, pagination behavior, or what version metadata is returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise (5 words) and front-loaded; every word earns its place though brevity sacrifices behavioral details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for simple single-parameter tool but lacks output specification critical given no output_schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage ('Artifact ID'), establishing baseline; description adds no parameter context beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'List' and resource 'version history' distinctly separates it from sibling 'get' (retrieve) and 'publish' (create), though 'artifact' lacks domain specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use versus alternatives (e.g., whether to use 'get' instead for current version only) or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.