Glama

Server Details

The publishing layer for AI agents. Turn HTML and Markdown into shareable URLs instantly via MCP.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.5/5 across 3 of 3 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct phase of the artifact lifecycle: 'publish' creates content, 'get' retrieves current state/content, and 'versions' queries history. No functional overlap exists between the three operations.

Naming Consistency: 3/5

While all names are lowercase, the pattern is mixed: 'get' and 'publish' use imperative verbs (with implied objects), while 'versions' uses a plural noun. This breaks the verb_noun convention and makes 'versions' appear as a resource rather than an action like the others.

Tool Count: 4/5

Three tools is slightly minimal for an artifact management service but acceptable for a focused 'publish and retrieve' scope. It sits just above the 'thin' threshold, providing core functionality without bloat, though additional lifecycle tools would be expected for production use.

Completeness: 2/5

Significant gaps prevent full artifact lifecycle management: there is no 'list' or 'search' to discover artifacts (forcing agents to rely on external ID storage), no 'delete' or 'unpublish' for cleanup, and no way to retrieve specific historical versions despite being able to list them. Agents will hit dead ends when trying to manage or clean up artifacts.

Available Tools

4 tools
get: A

Get artifact metadata. Set include_content to true to also return the full HTML content.

Parameters (JSON Schema):
- id (required): Artifact ID or short ID
- include_content (optional): Include the full HTML content in the response (default: false)
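As a sketch of how a client would invoke this tool, the JSON-RPC 2.0 envelope for an MCP `tools/call` to `get` might look like the following; the artifact ID is a hypothetical placeholder, not a real value from this server.

```python
import json

# Sketch of the JSON-RPC 2.0 envelope an MCP client sends for a
# tools/call to `get`. "abc123" is a hypothetical artifact ID.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get",
        "arguments": {
            "id": "abc123",           # artifact ID or short ID (placeholder)
            "include_content": True,  # also return the full HTML content
        },
    },
}
payload = json.dumps(request)
```

Omitting `include_content` (or passing false) returns metadata only, per the parameter's default.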
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It explains that HTML content is optionally returned, but omits safety profile, error handling (e.g., missing ID behavior), idempotency, or rate limiting details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Both sentences earn their place: the first defines scope (metadata retrieval), the second explains the critical behavioral toggle (include_content). Zero redundancy in 16 words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 2-parameter retrieval tool without output schema, the description adequately covers the return values (metadata/HTML). Minor gap in not describing error states or null returns, but sufficient for the complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% description coverage (baseline 3), the description adds valuable usage context by explicitly instructing to 'Set include_content to true' for full HTML retrieval, reinforcing the parameter's practical application.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'artifact metadata' using the specific verb 'Get' and distinguishes itself from siblings 'publish' (mutation) and 'versions' (listing) through its singular retrieval focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implicit guidance by explaining the include_content flag's effect, but lacks explicit guidance on when to use this tool versus siblings (e.g., when to prefer this over 'versions' for artifact inspection).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

publish: A

Publish HTML or Markdown content as a shareable URL. For Markdown, optionally include images as base64-encoded data.

Parameters (JSON Schema):
- title (optional): Artifact title (default: Untitled)
- format (required): Content format
- images (optional): Images referenced in the markdown content. Only used when format is 'markdown'.
- content (required): The HTML or Markdown content to publish
- visibility (optional): Visibility (default: unlisted)
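A minimal sketch of assembling `publish` arguments that mirrors the documented defaults. The format literal 'markdown' appears in the images parameter's description; 'html' is an assumed counterpart, since the schema only says "Content format".

```python
# Sketch of building `publish` arguments. 'markdown' is taken from the
# images parameter's description; 'html' is an assumed counterpart value.
def build_publish_args(content, fmt, title=None, visibility=None, images=None):
    if fmt not in ("html", "markdown"):
        raise ValueError("format must be 'html' or 'markdown'")
    if images is not None and fmt != "markdown":
        raise ValueError("images are only used when format is 'markdown'")
    args = {"content": content, "format": fmt}
    if title is not None:       # server defaults to 'Untitled'
        args["title"] = title
    if visibility is not None:  # server defaults to 'unlisted'
        args["visibility"] = visibility
    if images is not None:
        args["images"] = images
    return args

args = build_publish_args("# Hello", "markdown", title="Demo")
```

Leaving `title` and `visibility` unset lets the server apply its own defaults rather than hard-coding them client-side.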
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the output ('shareable URL') but fails to disclose critical mutation traits: persistence duration, whether publishing is permanent, whether updates are allowed, or the visibility implications of 'public' vs 'unlisted'.

Conciseness: 5/5

The description is a single, front-loaded sentence with zero waste. Every word earns its place: 'Publish' (action), 'HTML or Markdown' (input types), 'shareable URL' (output value). No filler or redundancy.

Completeness: 3/5

For a 4-parameter mutation tool with no output schema, the description adequately covers the core output ('shareable URL') but omits behavioral context about the visibility parameter's security implications and what the publishing operation returns. Sufficient but minimal for the complexity level.

Parameters: 3/5

Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the relationship between the content and format parameters by mentioning 'HTML or Markdown' but adds no syntax details, examples, or constraints beyond what the schema already provides.

Purpose: 5/5

The description uses a specific verb ('Publish') with a clear resource ('HTML or Markdown content') and output ('shareable URL'). It effectively distinguishes itself from siblings 'get' (retrieve) and 'versions' (history) by emphasizing the creation of a shareable artifact.

Usage Guidelines: 3/5

The description states what the tool does but provides no explicit guidance on when to use it versus alternatives, prerequisites for publishing, or when to choose 'public' vs 'unlisted' visibility. Usage is implied by the verb but not contextualized.

publish_site: A

Publish a multi-file HTML site from a base64-encoded ZIP file. The ZIP must contain an index.html at its root. For sites larger than ~10MB, prefer the REST API /v1/artifacts/upload endpoint.

Parameters (JSON Schema):
- title (optional): Site title (default: Untitled)
- visibility (optional): Visibility (default: unlisted)
- zip_base64 (required): Base64-encoded ZIP file containing the site files
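Preparing the `zip_base64` payload can be done entirely in memory with the standard library. A minimal sketch, enforcing the documented index.html-at-root requirement (the helper name and file contents are illustrative, not part of this server):

```python
import base64
import io
import zipfile

def make_site_zip_base64(files):
    """Zip a {path: content} mapping in memory and base64-encode it.

    publish_site requires an index.html at the ZIP root, so fail early
    if it is missing rather than round-tripping a rejected upload.
    """
    if "index.html" not in files:
        raise ValueError("publish_site requires index.html at the ZIP root")
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, content in files.items():
            zf.writestr(path, content)
    return base64.b64encode(buf.getvalue()).decode("ascii")

zip_b64 = make_site_zip_base64({
    "index.html": "<h1>Hello</h1>",
    "css/style.css": "h1 { color: teal; }",
})
```

Note that base64 inflates the payload by roughly a third, so the ~10MB guidance applies to the encoded string an agent actually transmits; larger sites should use the REST upload endpoint the description names.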
Behavior: 4/5

Without annotations, the description carries the full disclosure burden. It effectively communicates operational constraints: the ~10MB size limit and the mandatory index.html requirement ('ZIP must contain an index.html at its root'). It lacks disclosure of side effects, idempotency, or return value format.

Conciseness: 5/5

Three sentences with zero waste: action definition, file structure constraint, and size limit/alternative. Information is front-loaded and every sentence earns its place.

Completeness: 4/5

Given the complexity (file upload), complete parameter coverage, and lack of annotations/output schema, the description adequately covers input constraints and usage thresholds. A minor gap is the absence of a return value description, though the publishing action is self-evident.

Parameters: 4/5

With 100% schema coverage (baseline 3), the description adds significant semantic value by specifying the internal structure requirement of the ZIP parameter (index.html at root) and the size constraint (~10MB), which are not captured in the schema's basic type descriptions.

Purpose: 5/5

The description provides a specific verb (Publish), resource (multi-file HTML site), and input format (base64-encoded ZIP file), clearly distinguishing it from the generic sibling 'publish' tool by specifying the HTML site use case.

Usage Guidelines: 5/5

Explicitly states when NOT to use the tool ('For sites larger than ~10MB') and names the specific alternative ('prefer the REST API /v1/artifacts/upload endpoint'), providing clear guidance on tool selection boundaries.

versions: C

List version history of an artifact

Parameters (JSON Schema):
- id (required): Artifact ID
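In practice this tool pairs naturally with `get`. A hypothetical workflow sketch: list an artifact's history, then fetch its current content. `call_tool` stands in for whatever MCP client invocation is in use; it is not part of this server, and the artifact ID is a placeholder.

```python
# Hypothetical workflow pairing `versions` with `get`. `call_tool(name,
# arguments)` is a stand-in for an MCP client's tool invocation.
def audit_artifact(call_tool, artifact_id):
    # Query the artifact's version history first...
    history = call_tool("versions", {"id": artifact_id})
    # ...then retrieve the current state, including the full HTML content.
    current = call_tool("get", {"id": artifact_id, "include_content": True})
    return history, current
```

As the Completeness review notes, there is no tool to fetch a specific historical version, so the history can only be inspected, not replayed.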
Behavior: 2/5

With no annotations, the description fails to disclose the tool's read-only nature, its pagination behavior, or what version metadata is returned.

Conciseness: 4/5

Extremely concise (five words) and front-loaded; every word earns its place, though the brevity sacrifices behavioral detail.

Completeness: 3/5

Adequate for a simple single-parameter tool, but it lacks an output specification, which is critical given that no output schema exists.

Parameters: 3/5

The schema has 100% description coverage ('Artifact ID'), establishing the baseline; the description adds no parameter context beyond the schema.

Purpose: 4/5

The clear verb 'List' and resource 'version history' distinctly separate it from siblings 'get' (retrieve) and 'publish' (create), though 'artifact' lacks domain specificity.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives (e.g., whether 'get' suffices when only the current version is needed) or exclusion criteria.
