Server Details

OAuth-protected Streamable HTTP MCP gateway for NoonAI DIS image and video de-identification.

Status: Healthy
Transport: Streamable HTTP
Tool Descriptions: Grade B

Average 3.6/5 across 9 of 9 tools scored. Lowest: 3/5.

Server Coherence: Grade A
Disambiguation: 4/5

Tools are generally distinct with clear purposes, though the relationship between the one-shot operations (`encrypt_image`, `encrypt_video`) and the granular workflow (`upload_file` → `submit_encrypt_job`) requires the agent to understand that they are alternative paths to the same outcome. The descriptions explicitly clarify this relationship, preventing serious ambiguity.
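
The two alternative paths can be sketched side by side. This is a hypothetical sketch: `call_tool` stands in for an MCP client's tool-invocation method, and the argument values (`photo.jpg`, `default`, key index 0) are illustrative, not documented requirements of this server.

```python
# Hypothetical sketch of the two equivalent paths to the same outcome.
# `call_tool` is a placeholder for an MCP client's tool-call method.

def encrypt_one_shot(call_tool, path):
    # Single call: upload to NAS and job submission happen server-side.
    return call_tool("encrypt_image", {
        "file_path": path,
        "file_name": "photo.jpg",
        "key_name": "default",
        "key_index": 0,
    })

def encrypt_granular(call_tool, path):
    # Two calls: explicit upload, then an explicit job submission.
    call_tool("upload_file", {"file_path": path, "file_name": "photo.jpg"})
    return call_tool("submit_encrypt_job", {
        "file_name": "photo.jpg",
        "key_name": "default",
        "key_index": 0,
    })
```

Either function should end with the same submitted job; the one-shot form saves a round trip, while the granular form lets the agent inspect the upload result first.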

Naming Consistency: 5/5

Excellent consistency throughout. All tools follow the `verb_noun` snake_case convention (e.g., `upload_file`, `get_job_status`, `download_job_result`). Read operations consistently use the `get_` prefix, while actions use descriptive verbs (`submit`, `download`, `encrypt`).

Tool Count: 5/5

Nine tools is a well-suited count for this encryption service domain. The set provides appropriate granularity, offering both convenience methods (`encrypt_image`, `encrypt_video`) and atomic steps (`upload_file`, `submit_encrypt_job`), while covering the full job lifecycle (submission, polling, metadata retrieval, download) without bloat.

Completeness: 4/5

Covers the core async encryption workflow comprehensively: upload, job submission, status polling, result metadata retrieval, and file download. Minor gaps exist for job lifecycle management, such as canceling pending jobs or listing/filtering jobs beyond the basic `get_usage_summary` ULID list, but these are not critical for the primary use case.
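
The lifecycle just described (submit, poll, fetch metadata, download) can be sketched as a driver loop. This is an illustrative sketch, not the server's client SDK: `call_tool` stands in for an MCP client's tool-call method, and the `job_id`/`state`/`"completed"` field names are assumptions about the response shape.

```python
import time

def run_encrypt_job(call_tool, params, poll_interval=2.0, timeout=300.0):
    # Drive the full async lifecycle: submit, poll, fetch metadata, download.
    job_id = call_tool("submit_encrypt_job", params)["job_id"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call_tool("get_job_status", {"job_id": job_id})
        if status.get("state") == "completed":
            break
        time.sleep(poll_interval)
    else:  # loop exhausted without a break -> timed out
        raise TimeoutError(f"job {job_id} did not complete in {timeout}s")
    metadata = call_tool("get_job_result", {"job_id": job_id})
    download = call_tool("download_job_result", {"job_id": job_id})
    return download, metadata
```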

Available Tools

13 tools
download_job_result (Grade A)

Download one completed encrypt result file through the gateway.

This uses the legacy was-util POST /result/url endpoint to obtain a signed URL, then fetches the file server-side so that MCP clients do not depend on environment-specific external MinIO reachability. By default, the response includes base64 content when the file is small enough, and a local copy is always stored under data/downloads/<job_id>/ unless save_to_path is provided.
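
The size-dependent base64 default can be pictured with a small sketch. The function below is illustrative, not the server's implementation; the default limit value and the result field names are assumptions.

```python
import base64
import os

def package_result(data, job_id, file_name, save_to_path=None,
                   include_base64=True, inline_base64_limit_bytes=1_000_000):
    # Always write a local copy: data/downloads/<job_id>/ by default,
    # or the caller-supplied save_to_path.
    dest = save_to_path or os.path.join("data", "downloads", job_id, file_name)
    if os.path.dirname(dest):
        os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, "wb") as f:
        f.write(data)
    result = {"saved_to": dest, "size": len(data)}
    # Inline base64 only when requested and the file is small enough.
    if include_base64 and len(data) <= inline_base64_limit_bytes:
        result["base64"] = base64.b64encode(data).decode("ascii")
    return result
```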

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| job_id | Yes | | |
| file_name | No | | |
| file_index | No | | |
| save_to_path | No | | |
| include_base64 | No | | |
| inline_base64_limit_bytes | No | | |

Output Schema (JSON Schema)

No output parameters

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden excellently. It reveals the legacy internal API usage (was-util POST /result/url), the server-side fetching mechanism, local file system side effects (storage under data/downloads/<job_id>/), and base64 inclusion logic (size-dependent defaults).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three information-dense sentences with zero waste: sentence 1 defines purpose, sentence 2 explains technical implementation and rationale, sentence 3 details default behaviors. Every clause provides actionable information about behavior or requirements.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (6 parameters, file operations, base64 encoding) and lack of schema documentation, the description successfully covers critical behavioral aspects (side effects, storage paths, encoding logic) and leverages the existence of an output schema (so return values need not be described). Minor gap in file selection parameters (file_name, file_index) prevents a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description partially compensates by mentioning save_to_path (local storage override) and describing the base64 behavior (mapping to include_base64 and inline_base64_limit_bytes). However, it fails to explain file_name and file_index parameters, leaving half the parameter set undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool downloads 'one completed encrypt result file' using the specific verb 'download' and identifies the resource as an encrypt result, distinguishing it from sibling get_job_result (metadata) and submit_encrypt_job (creation). The context within the encrypt job workflow is evident.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides strong usage context by explaining the architectural rationale (gateway proxy for MinIO reachability issues) and prerequisite (job must be 'completed'). However, it stops short of explicitly naming alternatives or stating when NOT to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

encrypt_image (Grade A)

One-shot encrypt: upload to NAS then submit the encrypt job (same as upload_file + submit_encrypt_job in sequence).

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| key_name | Yes | | |
| file_name | Yes | | |
| file_path | Yes | | |
| file_type | No | | |
| key_index | Yes | | |
| restoration | No | | false |
| target_objects | No | | |
| encrypt_object_label | No | | |

Output Schema (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the composite nature (sequential operations), the NAS upload side effect, and the job-based async pattern. However, it omits idempotency guarantees, error handling behavior (what happens if upload succeeds but job submission fails), and permission requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that efficiently conveys the tool's purpose, behavior, and relationship to siblings. Every clause earns its place: the 'one-shot' prefix establishes the value proposition, the middle describes the action, and the parenthetical provides sibling context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters with zero schema documentation and no annotations, the description is incomplete. While the output schema reduces the need to describe return values, the lack of parameter documentation for required fields (key_name vs key_index semantics) and optional fields (restoration, target_objects) leaves critical gaps for tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% across 8 parameters, requiring significant compensation from the description. While the description implies file_path/file_name relate to the upload and key_index/key_name relate to encryption, it provides no guidance on the cryptic optional parameters (restoration, target_objects, encrypt_object_label) or distinctions between path and name.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the composite action (upload to NAS + submit encrypt job) and explicitly distinguishes this tool from siblings by noting it performs `upload_file` + `submit_encrypt_job` in sequence. Specific verbs and resources (NAS, encrypt job) are identified.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description establishes this as a convenience wrapper for two sequential operations, implying when to use it (when both steps are needed). However, it lacks explicit guidance on when to prefer the individual steps over this one-shot approach, or prerequisites for the file/key parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

encrypt_video (Grade B)

One-shot video encrypt with SaaS video policy validation.

Supported target objects: body, face, car.
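
A client-side pre-check against the supported target list is cheap insurance before submitting a video job. This is a hypothetical helper, not part of the server's API:

```python
SUPPORTED_TARGETS = {"body", "face", "car"}  # from the description above

def validate_targets(target_objects):
    # Fail fast on unsupported labels instead of burning a job submission.
    unsupported = set(target_objects) - SUPPORTED_TARGETS
    if unsupported:
        raise ValueError(f"unsupported target objects: {sorted(unsupported)}")
    return list(target_objects)
```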

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| key_name | Yes | | |
| file_name | Yes | | |
| file_path | Yes | | |
| file_type | No | | video/mp4 |
| key_index | Yes | | |
| restoration | No | | false |
| target_objects | No | | |
| encrypt_object_label | No | | |

Output Schema (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully mentions the 'SaaS video policy validation' constraint and target object detection, but fails to clarify critical behaviors such as whether the encryption is reversible (despite a 'restoration' parameter), error handling, or whether this overwrites the original file.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently front-loaded with two high-density sentences. There is no redundant or wasted language; the first sentence establishes the operation and validation policy, while the second specifies the target object constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 undocumented parameters (0% schema coverage), no annotations, and a complex encryption operation, the description is insufficient. While an output schema exists (reducing the need for return value documentation), the agent lacks critical context for required parameters like 'key_name' and 'key_index', and behavioral nuances necessary for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage across 8 parameters. The description only implicitly documents the 'target_objects' parameter by listing supported values (body, face, car). It completely fails to explain the other 7 parameters including 'key_index', 'key_name', 'restoration', or 'encrypt_object_label', leaving the agent without guidance on encryption keys or restoration behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the core action ('video encrypt') and distinguishes it from the sibling 'encrypt_image' tool. It adds specific context about 'SaaS video policy validation' and lists supported target objects (body, face, car), giving the agent a clear understanding of the tool's scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the term 'One-shot' implicitly hints at a synchronous operation potentially contrasting with the 'submit_encrypt_job' sibling, the description fails to explicitly state when to use this tool versus the async job submission alternative. No explicit 'when-not' guidance or prerequisites are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_billing_status (Grade A)

Return current flat-price billing status for the caller's stable billing principal.

Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds 'caller's' implying authentication scope and 'flat-price' clarifying billing model. However, fails to disclose read-only nature, side effects, or rate limiting that annotations would typically cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single dense sentence (11 words) with zero waste. Front-loaded with action verb 'Return'. Every element earns its place: 'current' (temporal), 'flat-price' (billing type), 'caller's stable billing principal' (scope).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for zero-parameter tool with output schema present. Explains what is returned sufficiently without needing to detail return structure. Minor gap: lacks read-only safety indication given absent annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters with 100% schema coverage (trivially). Baseline 4 appropriate as no parameter documentation is required. Description focuses on behavior/return rather than input.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Return' with clear resource 'flat-price billing status'. Distinguishes from siblings like get_usage_summary (usage-based vs flat-price) and get_pricing_policy (policy vs status) through specificity. 'Stable billing principal' adds scope clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance. With siblings get_usage_summary and get_pricing_policy covering related billing concerns, the description should clarify which billing aspect this covers, though 'flat-price' provides implicit differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_encrypt_constraints (Grade B)

Return SaaS-side upload and target-selection policy for image/video encryption.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| file_kind | No | | video |

Output Schema (JSON Schema)

No output parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It fails to specify what the policy contains (file size limits, formats, regions), whether results are cacheable, or rate limits. 'SaaS-side' hints at server constraints but lacks specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, nine words, front-loaded with the action verb. No redundancy or waste. Every word earns its place by conveying the tool's specific function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Minimal viable description for a single-parameter tool where output schema exists (reducing need to describe return values). However, gaps remain: valid values for file_kind are not enumerated, and workflow timing (call before submit_encrypt_job) is absent. Adequate but incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, requiring the description to compensate. The description mentions 'image/video' which loosely maps to the file_kind parameter's domain, implying valid values. However, it does not explicitly document that file_kind accepts 'image' or 'video' or explain the default 'video' behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Return') and identifies the resource ('SaaS-side upload and target-selection policy'). It distinguishes from sibling tools like encrypt_image/video (which perform encryption) by specifying it retrieves constraints rather than executing encryption, though it could explicitly clarify this is a prerequisite check rather than an action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives, or its position in the workflow. Given siblings include submit_encrypt_job and upload_file, the description should state to call this before submitting jobs or uploading to validate constraints, but it does not.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_job_result (Grade A)

Fetch encrypt result metadata via legacy was-encrypt GET /encrypt/result/:id.

Call when get_job_status reports completion (or to surface legacy 404 if not ready).
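
A client can fold the "legacy 404 if not ready" behavior into a single helper that returns None instead of failing. This is a hypothetical sketch: `call_tool` stands in for an MCP client's tool-call method, and the error shape (an exception carrying a `status` attribute) is an assumption about how the client surfaces HTTP errors.

```python
def fetch_result_if_ready(call_tool, job_id):
    # Treat the legacy 404 as "not completed yet" rather than a failure.
    try:
        return call_tool("get_job_result", {"job_id": job_id})
    except Exception as exc:
        if getattr(exc, "status", None) == 404:
            return None  # keep polling with get_job_status instead
        raise
```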

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| job_id | Yes | | |

Output Schema (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the 'legacy' system context and the 404 behavior when called prematurely, but lacks other critical behavioral details like authentication requirements, rate limits, or the structure of the returned metadata.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly concise with two sentences: the first front-loaded with the core purpose and endpoint context, the second providing precise invocation timing. No redundant words or structural waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete given the presence of an output schema (which handles return value documentation) and the tool's focused scope. It adequately situates the tool within the encrypt job workflow alongside its siblings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage for the 'job_id' parameter. While the description mentions the endpoint path 'GET /encrypt/result/:id', it fails to explicitly document what job_id is, its format, or that it originates from 'submit_encrypt_job', leaving a significant documentation gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Fetch[es] encrypt result metadata' with a specific verb and resource, and distinguishes itself from sibling 'download_job_result' by specifying 'metadata' rather than the actual file content. However, it could better define what 'metadata' encompasses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent guidance: explicitly states to 'Call when `get_job_status` reports completion', directly referencing the sibling polling tool, and warns about the 'legacy 404 if not ready' error case, giving clear temporal sequencing for the workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_job_status (Grade A)

Poll encrypt job progress via legacy was-util GET /progress/encrypt.

job_id is the ULID returned by submit_encrypt_job (maps to enc_request_list.id).
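
The ULID format mentioned here is a 26-character Crockford base32 string (no I, L, O, or U), so a client can cheaply sanity-check a job_id before polling. The sketch below is a loose format check only; it deliberately ignores the ULID rule that the first character must be 0-7.

```python
import re

# ULID: 26 characters of Crockford base32 (excludes I, L, O, U).
ULID_RE = re.compile(r"[0-9A-HJKMNP-TV-Z]{26}")

def looks_like_ulid(value):
    return bool(ULID_RE.fullmatch(value))
```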

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| job_id | Yes | | |

Output Schema (JSON Schema)

No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description compensates well by disclosing this is a 'legacy was-util' endpoint using 'GET /progress/encrypt'. The 'GET' and 'poll' language implies read-only behavior safe for repeated calls. Could improve by mentioning rate limits or what progress format is returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense sentences with zero waste. First sentence front-loads purpose and endpoint; second sentence details the critical parameter. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a single-parameter polling tool where output schema exists (so return values needn't be described). Covers purpose, endpoint, and parameter semantics. Minor gap in explicitly contrasting with get_job_result sibling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema coverage. Describes job_id as a ULID (format hint), specifies it originates from submit_encrypt_job (source/provenance), and maps it to internal database field enc_request_list.id (implementation context). Fully documents the single parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool polls encrypt job progress with specific verb and resource. Mentions 'encrypt' which distinguishes from potential other job types in the sibling set. However, it doesn't explicitly differentiate from sibling 'get_job_result' (status vs final result), leaving minor ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies workflow by stating job_id comes from 'submit_encrypt_job', suggesting this is used after job submission. However, lacks explicit guidance on when to use this vs 'get_job_result' or polling frequency recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pricing_policy (Grade A)

Return the flat MCP billing policy for image/video requests.

Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. The verb 'Return' implies a read-only, safe operation, but the description lacks details about rate limiting, caching behavior, authentication requirements, or whether 'flat' refers to a fixed rate structure. It meets minimum expectations by implying idempotency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single, efficient sentence with no redundancy. The most critical information (the resource retrieved and its domain scope) appears first, making it appropriately front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has zero parameters and an output schema exists (mentioned in context signals), the description does not need to explain return values. The mention of 'image/video' appropriately contextualizes it within the sibling tool suite of encryption services. However, it could strengthen completeness by explicitly linking to the encrypt job workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters. According to the evaluation rubric, zero-parameter tools receive a baseline score of 4. The description appropriately does not invent parameters where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Return[s] the flat MCP billing policy' with specific scope 'for image/video requests.' The verb 'Return' and resource 'billing policy' are explicit. However, it does not explicitly differentiate from siblings like 'get_billing_status' or 'get_encrypt_constraints' (which returns technical limits vs pricing), though the 'policy' naming helps distinguish it.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to invoke this tool versus alternatives, nor does it mention prerequisites like authentication requirements. Given siblings include 'purchase_credits' and 'submit_encrypt_job', guidance such as 'Use before submitting jobs to estimate costs' would be expected but is absent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_usage_summary (Grade A)

List registered public job ULIDs (ENCRYPT_JOB rows in public_id_map).

Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable technical context by specifying the data source ('public_id_map' table, 'ENCRYPT_JOB' rows), but fails to disclose safety characteristics (read-only, idempotent), pagination behavior, or performance constraints that would help the agent plan invocations safely.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficiently structured sentence that front-loads the action ('List registered public job ULIDs') before adding technical parenthetical context. Every word earns its place; there is no redundant elaboration or marketing language.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (removing the need to describe return values), zero parameters (removing input documentation burden), and the tool's narrow scope, the description is sufficiently complete. The specification of the source table ('public_id_map') provides necessary domain context, though mentioning read-only safety would have improved it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline score of 4. The description appropriately does not invent parameter semantics, though it effectively implies the operation requires no filtering inputs by stating it lists all registered ULIDs from the specified table.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('List') and resource ('registered public job ULIDs'), clarifying the tool enumerates job identifiers rather than retrieving details of a specific job. It distinguishes from siblings like get_job_status or get_job_result by specifying it returns 'ENCRYPT_JOB rows,' though the tool name 'get_usage_summary' slightly conflicts with the description's focus on listing IDs rather than summarizing usage metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives like get_job_status or get_job_result. It lacks statements about prerequisites (e.g., 'use this to enumerate jobs before retrieving specific results') or exclusion criteria, leaving the agent to infer usage context from the parameterless schema.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
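The enumerate-then-inspect pattern the review asks for can be sketched as a pair of hypothetical tools/call payloads. The client helper, the get_job_status follow-up, and its job_id parameter name are assumptions; only the tool names come from this page.

```python
import json

def tools_call(name: str, arguments: dict) -> str:
    """Build a JSON-RPC tools/call payload (hypothetical client helper)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# get_usage_summary takes no parameters, so arguments is an empty object.
list_jobs = tools_call("get_usage_summary", {})

# A follow-up per returned ULID; the job_id parameter name is an assumption.
inspect_job = tools_call("get_job_status", {"job_id": "01HEXAMPLEULID000000000000"})

assert '"get_usage_summary"' in list_jobs
```

The sketch only builds the request payloads; a real client would send them over the Streamable HTTP transport and read the ULID list from the first response before issuing the second call.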

purchase_credits (C)

Charge the caller's stored payment method and credit the legacy tenant balance.

Parameters

- credits (required)
- description (optional; default: "DIS MCP credit top-up")
- idempotency_key (optional)

Output Schema

No output parameters

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the financial mutation ('Charge') and destination ('legacy tenant balance'), but lacks critical behavioral details: it doesn't explain the idempotency pattern (despite having an idempotency_key parameter), doesn't disclose failure modes (insufficient funds, invalid payment method), and doesn't mention what the output schema contains.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single 11-word sentence, front-loaded with the action. While concise, it is likely underspecified for a financial mutation operation with zero schema documentation, suggesting brevity comes at the cost of necessary behavioral context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a financial write operation with 3 parameters, 0% schema coverage, and no annotations, the description provides insufficient context. It omits the purpose of the 'idempotency_key' (essential for financial safety), doesn't explain what 'credits' represent, and doesn't reference the output schema structure despite available return data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. While it mentions 'credits' conceptually, it fails to document the semantics of the 'credits' parameter (units, constraints), the 'idempotency_key' (critical for financial retries), or the 'description' parameter purpose. The description does not bridge the gap left by the undescribed schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
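Assuming idempotency_key behaves like a conventional payment-API deduplication token (the description does not confirm this), a safe retry pattern would generate one key and reuse it verbatim on every attempt:

```python
import uuid

# Generate the key once, before the first attempt; reuse it on every retry.
# Under the assumed semantics, a compliant server then charges at most once.
idempotency_key = str(uuid.uuid4())

def purchase_args(credits: int) -> dict:
    # Parameter names are taken from the purchase_credits input schema.
    return {
        "credits": credits,
        "description": "DIS MCP credit top-up",  # the schema's stated default
        "idempotency_key": idempotency_key,
    }

first_attempt = purchase_args(100)
retry_after_timeout = purchase_args(100)
assert first_attempt == retry_after_timeout  # identical payload on retry
```

The units of credits and the server-side lifetime of the key are undocumented, so both remain assumptions in this sketch.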

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Charge', 'credit') and identifies the resources ('stored payment method', 'legacy tenant balance'). It distinguishes from sibling read operations (get_billing_status) and setup operations (register_billing_payment_method) by describing the actual transaction flow, though it doesn't explicitly state this is for purchasing/adding credits.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives, nor prerequisites mentioned. Given the sibling 'register_billing_payment_method', the description should clarify that a payment method must be registered first, but it omits this critical prerequisite.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_billing_payment_method (B)

Encrypt and store a default payment method for the caller's billing principal.

Parameters

- buyer_id (required)
- card_cvc (optional)
- card_ssn (optional)
- buyer_name (required)
- buyer_email (required)
- card_number (required)
- payment_type (optional; default: "LOCAL02")
- card_expiry_ym (required)
- card_password_2digits (optional)

Output Schema

No output parameters

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the burden of disclosing behavior. It successfully indicates encryption handling and that this sets a 'default' payment method. However, it omits critical mutation details: whether this overwrites an existing default, validation rules for card data, or compliance/PCI implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficiently structured with action verbs front-loaded. However, given the high parameter count and zero schema documentation, this level of brevity is insufficient rather than optimally concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a financial mutation tool with 9 undocumented parameters, no annotations, and sensitive data handling, the description is incomplete. It lacks parameter explanations, error conditions, and behavioral edge cases (overwrites vs. rejects duplicates) necessary for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage across 9 parameters, the description fails to compensate. It does not explain the relationship between buyer_id/billing principal, expected formats (card_expiry_ym), payment_type options, or the purpose of optional sensitive fields (card_cvc, card_ssn, card_password_2digits).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides specific verbs ('Encrypt and store'), identifies the resource ('default payment method'), and scopes it to the 'caller's billing principal.' This clearly distinguishes it from sibling media-encryption tools (encrypt_image, encrypt_video) and billing queries (get_billing_status), establishing a distinct purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like purchase_credits, nor does it mention prerequisites (e.g., existing billing principal requirements) or conditions where registration might fail.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
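The prerequisite ordering the review flags can be sketched as follows. Only the required fields from the input schema are supplied; all values are placeholders, and the card_expiry_ym format is a guess since the description leaves it undocumented.

```python
# Required fields per the register_billing_payment_method input schema;
# the optional card_cvc/card_ssn/card_password_2digits/payment_type are omitted.
register_args = {
    "buyer_id": "buyer-001",
    "buyer_name": "Jane Doe",
    "buyer_email": "jane@example.com",
    "card_number": "4111111111111111",  # well-known test card number
    "card_expiry_ym": "2712",           # YYMM is an assumption; format undocumented
}

required = {"buyer_id", "buyer_name", "buyer_email", "card_number", "card_expiry_ym"}
assert required.issubset(register_args)

# The implied workflow: register a payment method before buying credits.
call_order = ["register_billing_payment_method", "purchase_credits"]
assert call_order[0] == "register_billing_payment_method"
```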

submit_encrypt_job (B)

Submit an encrypt job: syncTime/request/encrypt/sendMessage/encrypt.

Requires a valid asset_id from upload_file. When that asset id exists in the gateway registry, the uploaded metadata (stored filename, width/height, checksum, file size) is treated as authoritative and mismatched caller values are rejected.

source_file_path is optional for split flow. It is only used when a caller wants the gateway to re-read the local file size instead of reusing the persisted upload metadata.

Parameters

- width (optional)
- height (optional)
- asset_id (required)
- checksum (optional)
- key_name (required)
- file_name (optional)
- file_type (optional)
- key_index (required)
- restoration (optional; default: false)
- target_objects (optional)
- source_file_path (optional)
- encrypt_object_label (optional)

Output Schema

No output parameters

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses important behavioral details: the gateway registry treats uploaded metadata as authoritative and rejects mismatched caller values, and explains the specific conditions for using source_file_path. However, it omits behavior for other parameters like restoration or target_objects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three focused paragraphs. The workflow notation in the first sentence is information-dense, and the subsequent sentences provide specific behavioral details without redundancy. Every sentence adds value beyond the schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (12 parameters, prerequisite workflow) and zero schema coverage, the description adequately covers the core submission flow and critical asset_id validation logic. However, it is incomplete regarding the majority of parameters, including required fields, which is a significant gap for a tool of this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate significantly. While it provides detailed semantics for asset_id and source_file_path (including the split flow concept), it fails to document the other required parameters (key_name, key_index) or explain the purpose of restoration, target_objects, or encrypt_object_label, leaving most of the 12 parameters undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool submits an encrypt job and uses a workflow diagram (syncTime → /request/encrypt → /sendMessage/encrypt) to distinguish this asynchronous, multi-step process from direct encryption siblings like encrypt_image. It also specifies the upload_file prerequisite, further differentiating its use case.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides clear prerequisites (requires valid asset_id from upload_file) and mentions a specific usage pattern ('split flow' for source_file_path). However, it lacks explicit guidance on when to choose this job-based tool over the direct encrypt_image or encrypt_video alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
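The split flow the description mandates can be sketched as two sequential calls. The call_tool helper and the key_name/key_index values are hypothetical; the tool and parameter names come from the schemas above.

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stand-in for a real MCP client; returns the payload it would send."""
    return {"method": "tools/call", "params": {"name": name, "arguments": arguments}}

# Step 1: upload the source file. A real response would carry the asset_id.
upload_request = call_tool("upload_file", {
    "file_name": "face.png",
    "file_path": "/tmp/face.png",
})

# Stubbed here; in practice, read this ULID from the upload_file response.
asset_id = "01HEXAMPLEULID000000000000"

# Step 2: submit the job with only the required parameters. Since the gateway
# treats persisted upload metadata as authoritative, omitting width/height/
# checksum avoids the mismatch-rejection path the description warns about.
job_request = call_tool("submit_encrypt_job", {
    "asset_id": asset_id,
    "key_name": "default-key",  # semantics undocumented; placeholder
    "key_index": 0,             # semantics undocumented; placeholder
})

assert job_request["params"]["arguments"]["asset_id"] == asset_id
```

Under this reading, source_file_path would only be added when the caller wants the gateway to re-read the local file size instead of reusing the persisted metadata.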

upload_file (A)

Upload a file to legacy NAS via was-util /uploadNAS (port 5000).

Returns an asset_id (ULID) registered in public_id_map for use with submit_encrypt_job. Heavy uploads are isolated from job submission.

Parameters

- file_name (required)
- file_path (required)
- file_type (optional)

Output Schema

No output parameters

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses key behavioral traits: the specific endpoint/port usage, the ULID format of returned asset_ids, registration in `public_id_map`, and the performance characteristic of isolating heavy uploads from job submission. Minor gap: does not mention idempotency, overwrite behavior, or size constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first establishes the action and endpoint, second explains the return value and downstream usage, third provides architectural context. Front-loaded with the essential operation and appropriately dense.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a workflow-step tool with output schema (which handles return value documentation), but incomplete due to missing parameter explanations critical for a 3-parameter upload operation with zero schema coverage. Missing error handling and constraint documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage (all three parameters undocumented in schema), the description fails to explain parameter semantics. It does not clarify whether `file_path` is local source or remote destination, how `file_name` relates to the path, or the purpose of `file_type` (format validation vs metadata). Significant gap for a file upload tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (upload), resource (file), destination (legacy NAS via was-util `/uploadNAS` port 5000), and return value (asset_id). It explicitly distinguishes this from sibling tools by linking it to `submit_encrypt_job` and contrasting the isolation of heavy uploads from job submission.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear workflow context by stating the returned asset_id is 'for use with `submit_encrypt_job`', establishing the tool's role in the encryption pipeline. It explains the architectural rationale (isolation of heavy uploads) but does not explicitly state when NOT to use this versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
