Caliper
Server Details
Geometry and CAD file metadata extraction for STL, OBJ, PLY, PCD, LAS/LAZ, glTF/GLB.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
10 tools

feature.request: Request a Feature
Request a feature or format that Caliper doesn't support yet.
Free, no payment required. Use this when you need a capability that
Caliper doesn't currently offer — especially unsupported file formats
(STEP, IGES) or analysis features not yet available. Requests are
logged and used to prioritize development.
Privacy policy: https://caliper.fit/privacy

| Name | Required | Description | Default |
|---|---|---|---|
| description | Yes | A short description of the feature or format you need. Examples: 'STEP file support', 'volume comparison between two files', 'export to USD format'. This helps prioritize development. | |
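As a concrete illustration, the call above can be framed as an MCP `tools/call` request. The sketch below builds that JSON-RPC envelope by hand (the envelope shape follows the MCP specification; the `build_feature_request` helper name is hypothetical, while the tool name and the `description` argument come from the table above).

```python
import json

def build_feature_request(description, request_id=1):
    """Build a JSON-RPC 2.0 tools/call payload for the feature.request tool.

    The envelope follows the MCP tools/call shape; only 'description'
    is required by the tool's schema.
    """
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "feature.request",
            "arguments": {"description": description},
        },
    }
    return json.dumps(payload)

msg = build_feature_request("STEP file support")
```

How the payload is transported (Streamable HTTP, a gateway, or an SDK client) is up to the caller; the tool itself is free and requires no payment argument.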
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral context absent from annotations: zero-cost ('Free, no payment required'), persistence model ('logged and used to prioritize development'), and privacy compliance. Properly aligned with readOnlyHint:false (writes a request) and destructiveHint:false (non-destructive).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences each earning their place: purpose statement, cost disclosure, usage guidance with sibling differentiation, and behavioral outcome. Front-loaded with clear action verb. Privacy link is appropriately placed at end.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter submission tool without output schema, the description adequately covers purpose, cost, usage triggers, data handling, and legal compliance. No gaps requiring additional disclosure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with detailed examples ('STEP file support', 'volume comparison'). Tool description provides no additional parameter semantics beyond the schema, which is acceptable baseline given the schema's completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Request' plus clear resource 'feature or format'. Explicitly distinguishes from sibling format.* tools by specifying it handles capabilities Caliper 'doesn't support yet', implying the format.* analysis tools handle currently-supported formats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit directive 'Use this when you need a capability that Caliper doesn't currently offer' with concrete examples of unsupported formats (STEP, IGES) that contrast with the supported formats handled by sibling conversion tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
format.auto: Auto-Detect and Analyze (Read-only, Idempotent)
Auto-detect geometry file format and extract metadata statistics.
Accepts a 3D geometry file via URL or base64 and returns structured
metadata: bounding boxes, triangle counts, manifold analysis, point
cloud statistics, and more.
Supported formats: STL, OBJ, PLY, PCD, LAS/LAZ, glTF/GLB.
STEP and IGES support is planned.
Provide either file_url (preferred for large files) or file_b64
(for files under 200KB). Include filename for format detection if
using file_b64.
Files under 150KB are free. Larger files require payment via x402
(USDC on Base). If payment is required, the response includes payment
details. Retry with the payment argument set to a JSON string
containing "transaction", "network", and "priceToken".
Privacy policy: https://caliper.fit/privacy

| Name | Required | Description | Default |
|---|---|---|---|
| payment | No | x402 payment proof as a JSON string. Set this when retrying after a payment_required response. Must contain 'transaction' (on-chain tx hash), 'network', and 'priceToken' from the payment_required details. | |
| file_b64 | No | Base64-encoded file content. Use for files under 200 KB. | |
| file_url | No | HTTP/HTTPS URL of the geometry file to analyze. Preferred for large files. | |
| filename | No | Original filename with extension, used for format detection when providing file_b64. | |
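The file_url-versus-file_b64 guidance above can be sketched as a small client-side helper. This is an illustrative assumption, not part of the Caliper API: the `build_auto_arguments` name is invented, and only the 200 KB base64 limit and the filename-for-detection rule come from the description and schema.

```python
import base64
import os

B64_LIMIT = 200 * 1024  # per the schema: file_b64 is for files under 200 KB

def build_auto_arguments(path=None, url=None):
    """Pick the input method for format.auto per the guidance above:
    URLs are preferred for large files; base64 only under 200 KB."""
    if url is not None:
        return {"file_url": url}
    if path is None:
        raise ValueError("provide a local path or a URL")
    if os.path.getsize(path) >= B64_LIMIT:
        raise ValueError("file too large for base64; host it and pass file_url")
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("ascii")
    # Include the filename so the server can detect the format by extension.
    return {"file_b64": data, "filename": os.path.basename(path)}
```

Note that files between 150 KB (the free tier limit) and 200 KB can still be sent as base64 but will trigger the payment_required flow.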
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent coverage beyond annotations: discloses the cost model ('Files under 150KB are free'), payment mechanism ('x402 USDC on Base'), retry logic for the payment parameter, external dependencies (HTTP/HTTPS URLs), privacy policy URL, and previews the output structure ('bounding boxes, triangle counts, manifold analysis').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear information hierarchy: core purpose first, input methods second, format support third, pricing/workflow fourth. Each sentence delivers essential information; nothing is wasted despite covering multiple complex concerns (payment, multiple input methods, supported formats).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive despite having no output schema. The description enumerates expected return values ('bounding boxes, triangle counts...'), documents the payment barrier (a major behavioral constraint), and lists supported/planned formats. Combined with strong annotations, this is fully sufficient for an agent to predict outcomes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds crucial semantic context: size constraints (200KB limit for b64), the requirement for filename when using base64 ('for format detection'), and the specific retry workflow for the payment parameter. This significantly aids correct invocation beyond raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb-resource statement ('Auto-detect geometry file format and extract metadata statistics') that clearly defines the tool's scope. The 'auto-detect' qualifier effectively distinguishes it from the specific format.* siblings (format.stl, format.obj, etc.) that likely handle known formats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use file_url versus file_b64 ('preferred for large files' vs 'for files under 200KB') and documents the retry workflow for payment. Deduct one point because it does not explicitly differentiate this tool from the sibling 'format.detect' despite the similar naming, leaving potential ambiguity about when to use which.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
format.batch: Batch Analysis (Read-only, Idempotent)
Analyze multiple geometry files in a single batch request.
Submit up to 10 files, receive a single quote, pay once, and get
structured metadata for all files. Supports mixed formats.
Payment is required (USDC on Base via x402). If no payment is provided,
the response includes the total price and per-file breakdown. Retry
with the payment argument containing "transaction", "network", and
"priceToken".
Partial success: if some files fail processing, you still receive
results for the files that succeeded.
Privacy policy: https://caliper.fit/privacy

| Name | Required | Description | Default |
|---|---|---|---|
| files | Yes | JSON array of file descriptors. Each object has optional keys: "file_url" (HTTP/S URL), "file_b64" (base64 string), "filename" (for format detection). Provide either file_url or file_b64 per file. Max 10 files per batch. | |
| payment | No | x402 payment proof as a JSON string. Set this when retrying after a payment_required response. Must contain 'transaction' (on-chain tx hash), 'network', and 'priceToken' from the payment_required details. | |
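Building the `files` descriptor array from the schema above might look like the following sketch. The helper name is hypothetical; the descriptor keys (`file_url`, `file_b64`, `filename`) and the 10-file cap come directly from the parameter table.

```python
import json

MAX_BATCH = 10  # per the schema: max 10 files per batch

def build_batch_files(urls):
    """Build the 'files' argument for format.batch: a JSON string holding
    an array of descriptors, here using file_url for every entry."""
    descriptors = [{"file_url": u} for u in urls]
    if len(descriptors) > MAX_BATCH:
        raise ValueError(f"batch limited to {MAX_BATCH} files")
    return json.dumps(descriptors)
```

Mixed formats are fine within one batch; per the description above, the first (unpaid) call returns the total price with a per-file breakdown, and the retry carries the payment argument.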
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint, idempotentHint, and openWorldHint. The description adds crucial behavioral context not in annotations: the two-phase payment flow (quote then pay), partial success handling ('if some files fail processing, you still receive results'), and the x402 payment specifics (USDC on Base). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with logical flow: capability → payment requirement → retry instructions → error handling → privacy. Each sentence earns its place, though the payment explanation requires multiple sentences due to the complexity of the x402 flow. No redundant or tautological language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately compensates by explaining response variations (price breakdown vs. results) and partial success scenarios. It covers the essential complexity (payment, batching, mixed formats) sufficiently for an agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by contextualizing the payment parameter within a retry workflow and explaining the business logic (single quote for all files). It reinforces the files parameter constraints (max 10, mixed formats) that appear in the schema but emphasizes their operational significance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Analyze[s] multiple geometry files in a single batch request' with specific scope (up to 10 files, mixed formats). It effectively distinguishes itself from single-format siblings (format.obj, format.gltf, etc.) by emphasizing batch processing and mixed format support.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear workflow guidance for the payment retry mechanism ('If no payment is provided... Retry with...') and explains partial success behavior. While it implies usage for multiple files via 'mixed formats,' it does not explicitly contrast with single-file alternatives or state when to prefer specific format tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
format.detect: Detect File Format (Read-only, Idempotent)
Detect the geometry file format from a filename or URL.
Returns the detected format name and whether it is currently supported.
Use this to check format support before making a paid analysis call.
No payment required. Privacy policy: https://caliper.fit/privacy

| Name | Required | Description | Default |
|---|---|---|---|
| filename | Yes | Filename or URL to check. The format is detected from the file extension. | |
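Since detection is extension-based, a client can mirror the same preflight check locally before deciding whether a paid analysis call is worthwhile. This is a client-side approximation, not the server's implementation; the supported-format set is taken from the server summary at the top of this page.

```python
# Formats listed as supported at the top of this page.
SUPPORTED = {"stl", "obj", "ply", "pcd", "las", "laz", "gltf", "glb"}

def extension_of(name):
    """Extract the lowercase extension from a filename or URL,
    approximating how format.detect reads the extension."""
    base = name.split("?", 1)[0].rstrip("/").rsplit("/", 1)[-1]
    if "." not in base:
        return ""
    return base.rsplit(".", 1)[-1].lower()

def looks_supported(name):
    return extension_of(name) in SUPPORTED
```

The authoritative answer still comes from format.detect itself, which also reports formats that are recognized but not yet supported (e.g. STEP, IGES).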
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnly/idempotent), but description adds valuable context: return values ('format name and whether...supported'), cost structure ('No payment required'), and privacy policy link. Lacks rate limit or error case details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each earning its place: purpose, return values, usage guidance, cost/privacy. Front-loaded with core action, zero redundancy, appropriate density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for missing output schema by describing return values. Simple single-parameter tool with strong annotations. Minor gap: doesn't specify behavior for unsupported/missing extensions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage. Description mentions 'filename or URL' matching the schema, but adds domain context ('geometry') not explicit in schema. Baseline 3 appropriate per rubric for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb-resource pair ('Detect the geometry file format') clearly identifies the action and domain. Effectively distinguishes from sibling format.* tools by focusing on detection rather than conversion/analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('before making a paid analysis call') and cost implications ('No payment required'), providing clear workflow guidance against expensive alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
format.gltf: glTF/GLB Analysis (Read-only, Idempotent)
Extract metadata from a glTF or GLB file.
Returns asset info, scene graph structure, mesh/material/texture
counts, vertex and index totals, feature flags (normals, tangents,
texcoords, colors, joints), primitive modes, and extensions.
Payment required via x402 (USDC on Base). See format.auto
for payment flow details. Privacy policy: https://caliper.fit/privacy

| Name | Required | Description | Default |
|---|---|---|---|
| payment | No | x402 payment proof as a JSON string. Set this when retrying after a payment_required response. Must contain 'transaction' (on-chain tx hash), 'network', and 'priceToken' from the payment_required details. | |
| file_b64 | No | Base64-encoded file content. Use for files under 200 KB. | |
| file_url | No | HTTP/HTTPS URL of the geometry file to analyze. Preferred for large files. | |
| filename | No | Original filename with extension, used for format detection when providing file_b64. | |
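The payment retry described in the schema (shared by all paid format.* tools) amounts to serializing three fields copied from the payment_required details. A minimal sketch, assuming only what the parameter table states; the helper name is invented, and the exact shape of the payment_required response is not documented here.

```python
import json

def payment_argument(transaction, network, price_token):
    """Serialize the x402 payment proof as the JSON string the 'payment'
    parameter expects: 'transaction' (on-chain tx hash), 'network', and
    'priceToken', copied from the payment_required details."""
    return json.dumps({
        "transaction": transaction,
        "network": network,
        "priceToken": price_token,
    })
```

On a retry, the original file arguments are resent unchanged with this string added as the `payment` argument.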
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent status, but description adds critical behavioral context not present in structured data: payment mechanism (x402/USDC on Base), privacy policy link, and detailed enumeration of extracted metadata fields that compensates for missing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two compact paragraphs: first establishes extraction purpose and return payload structure, second covers payment/legal requirements. No redundancy; every clause delivers distinct information about functionality, output, or operational constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacking output schema, description comprehensively enumerates return values (asset info, scene graph, counts, feature flags, extensions). Payment requirement disclosure is essential for this endpoint. Minor gap: no mention of error conditions or file size constraints beyond what's in schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (file_b64, file_url, filename, payment all fully documented), so baseline 3 applies. Description provides minimal parameter-specific guidance beyond referencing format.auto for payment flow context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb ('Extract metadata') and resource ('glTF or GLB file'), then distinguishes scope via detailed list of extracted data (scene graph, mesh counts, feature flags). Format-specificity clearly differentiates it from sibling tools like format.obj or format.las.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly references sibling tool format.auto for 'payment flow details,' guiding the agent to the correct prerequisite step. However, lacks explicit guidance on when to choose this over format.detect or other format-specific tools in the suite.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
format.las: LAS/LAZ Analysis (Read-only, Idempotent)
Extract metadata from a LAS or LAZ point cloud file.
Returns LAS version, point format, point count, scale factors,
offsets, bounding box, classification counts, feature flags
(RGB, intensity, GPS time, waveform), and VLR information.
Payment required via x402 (USDC on Base). See format.auto
for payment flow details. Privacy policy: https://caliper.fit/privacy

| Name | Required | Description | Default |
|---|---|---|---|
| payment | No | x402 payment proof as a JSON string. Set this when retrying after a payment_required response. Must contain 'transaction' (on-chain tx hash), 'network', and 'priceToken' from the payment_required details. | |
| file_b64 | No | Base64-encoded file content. Use for files under 200 KB. | |
| file_url | No | HTTP/HTTPS URL of the geometry file to analyze. Preferred for large files. | |
| filename | No | Original filename with extension, used for format detection when providing file_b64. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent safety, while the description adds critical behavioral context not in structured fields: the x402 payment requirement (USDC on Base), privacy policy implications, and detailed enumeration of return values (classification counts, feature flags, VLR info) to compensate for missing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured and front-loaded: core purpose (sentence 1), return value specification (sentence 2), payment requirement and prerequisite tool reference (sentence 3), and compliance link (sentence 4). No redundant or filler content; every sentence provides essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description comprehensively lists return fields (version, point format, bounding box, etc.). It adequately covers the payment complexity and references the privacy policy. Minor gap: could explicitly mention the payment_required error condition, though this is implied in the payment parameter description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured fields already comprehensively document all four parameters (payment, file_b64, file_url, filename), including when to use base64 vs URL and the retry context for payment. The description adds the USDC on Base currency detail but largely relies on the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Extract') and resource ('metadata from a LAS or LAZ point cloud file'), clearly distinguishing it from sibling format.* tools that handle different file types (GLTF, OBJ, etc.) or different operations (format.auto for payment flow, format.batch for batch processing).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly references format.auto for payment flow details, establishing a clear prerequisite workflow (use format.auto first, then this tool). However, it does not explicitly differentiate from format.batch for multi-file scenarios or when to prefer file_url vs file_b64, though schema descriptions partially cover this.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
format.obj: OBJ Analysis (Read-only, Idempotent)
Extract metadata from an OBJ file.
Returns vertex/normal/texcoord/face counts, triangle/quad/polygon
breakdown, material and group counts, bounding box, surface area,
and triangulation status.
Payment required via x402 (USDC on Base). See format.auto
for payment flow details. Privacy policy: https://caliper.fit/privacy

| Name | Required | Description | Default |
|---|---|---|---|
| payment | No | x402 payment proof as a JSON string. Set this when retrying after a payment_required response. Must contain 'transaction' (on-chain tx hash), 'network', and 'priceToken' from the payment_required details. | |
| file_b64 | No | Base64-encoded file content. Use for files under 200 KB. | |
| file_url | No | HTTP/HTTPS URL of the geometry file to analyze. Preferred for large files. | |
| filename | No | Original filename with extension, used for format detection when providing file_b64. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/openWorld traits. The description adds crucial behavioral context not in annotations: the x402 payment requirement (USDC on Base), the specific return structure (counts, breakdowns, surface area), and privacy policy implications. It omits error behaviors for invalid files or failed payments.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description exhibits excellent structure: purpose (sentence 1), return values (sentence 2), payment prerequisite (sentence 3), related tool pointer (sentence 4), and compliance link (sentence 5). Information is front-loaded with zero redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (payment processing, dual input methods, external URL fetching) and absence of output schema, the description adequately covers return values and critical payment prerequisites. It appropriately delegates payment workflow details to format.auto rather than duplicating them.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, comprehensive details (e.g., 'Use for files under 200 KB') are already present in the schema. The description provides high-level context that file inputs should be OBJ but does not augment parameter semantics beyond the well-documented schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool extracts metadata from OBJ files and enumerates specific outputs (vertex/normal counts, bounding box, triangulation status). It distinguishes itself from sibling format.* tools by explicitly specifying OBJ format scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description references format.auto for payment flow details, establishing a relationship between tools. However, it lacks explicit guidance on when to use format.obj versus format.detect (for unknown formats) or versus format.batch, and does not clarify selection criteria between file_b64 and file_url beyond the schema descriptions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
format.pcd: PCD Analysis (Read-only, Idempotent)
Extract metadata from a PCD point cloud file.
Returns point count, field definitions, data format, organization,
viewpoint, feature flags (RGB, intensity, normals, curvature),
bounding box, centroid, and point density estimate.
Payment required via x402 (USDC on Base). See format.auto
for payment flow details. Privacy policy: https://caliper.fit/privacy

| Name | Required | Description | Default |
|---|---|---|---|
| payment | No | x402 payment proof as a JSON string. Set this when retrying after a payment_required response. Must contain 'transaction' (on-chain tx hash), 'network', and 'priceToken' from the payment_required details. | |
| file_b64 | No | Base64-encoded file content. Use for files under 200 KB. | |
| file_url | No | HTTP/HTTPS URL of the geometry file to analyze. Preferred for large files. | |
| filename | No | Original filename with extension, used for format detection when providing file_b64. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only/idempotent/open-world traits. The description adds critical behavioral context not in annotations: payment requirement (x402/USDC), specific return value structure (bounding box, centroid, etc.), and privacy policy link. Does not mention rate limits or error states.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences with zero waste. Front-loaded with purpose and return values. Payment requirement and sibling reference follow logically. Privacy policy link is appropriately placed at the end.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent completeness given no output schema exists. The detailed enumeration of returned metadata (point count, feature flags, etc.) fully compensates for the missing output schema. Payment requirements and privacy policy provide necessary operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'Payment required' which contextually supports the payment parameter, but does not elaborate on the file input options (file_b64 vs file_url) or the retry workflow detailed in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb-resource pair ('Extract metadata from a PCD point cloud file') and clearly distinguishes itself from siblings by specifying the PCD format. It enumerates the exact metadata fields returned, further clarifying scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly references sibling tool format.auto for 'payment flow details,' guiding users to the correct tool for payment setup versus analysis. Clear that this is for PCD-specific analysis. Could improve by contrasting with format.detect or format.batch for format-agnostic workflows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
format.ply — PLY Analysis (A · Read-only · Idempotent)
Extract metadata from a PLY file (ASCII or binary).
Returns vertex/face counts, element properties, feature detection
(normals, colors, texcoords, intensity, curvature), bounding box,
centroid, and point cloud identification.
Payment required via x402 (USDC on Base). See format.auto
for payment flow details. Privacy policy: https://caliper.fit/privacy
| Name | Required | Description | Default |
|---|---|---|---|
| payment | No | x402 payment proof as a JSON string. Set this when retrying after a payment_required response. Must contain 'transaction' (on-chain tx hash), 'network', and 'priceToken' from the payment_required details. | |
| file_b64 | No | Base64-encoded file content. Use for files under 200 KB. | |
| file_url | No | HTTP/HTTPS URL of the geometry file to analyze. Preferred for large files. | |
| filename | No | Original filename with extension, used for format detection when providing file_b64. | |
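The payment parameter's retry workflow can be sketched as follows. This is an illustrative assumption of how a client might assemble the proof string after an initial call fails with payment_required; the three field names come from the parameter description above, while `payment_required_details` and `tx_hash` are hypothetical inputs standing in for the error payload and the settled on-chain transaction:

```python
import json


def build_payment_proof(payment_required_details, tx_hash):
    """Assemble the x402 payment proof for a retry call.

    `payment_required_details` stands in for the details object returned on
    the first (unpaid) attempt; `tx_hash` is the on-chain hash of the USDC
    payment settled on Base.
    """
    proof = {
        "transaction": tx_hash,
        "network": payment_required_details["network"],
        "priceToken": payment_required_details["priceToken"],
    }
    # The schema asks for a JSON *string*, not a nested object.
    return json.dumps(proof)
```

On retry, the returned string is passed as the `payment` argument alongside the same file input as the first attempt.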
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true, which the description respects by framing the operation as 'Extract' (read-only). The description adds crucial behavioral context not in annotations: the x402 payment mechanism (USDC on Base), support for both ASCII and binary PLY variants, and detailed disclosure of what the tool returns (vertex counts, feature detection, etc.).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded: purpose first, return values second, payment requirements third, and cross-reference last. The privacy policy URL, while legally necessary, slightly reduces conciseness for tool selection purposes, but every other sentence directly aids invocation or behavioral understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately compensates by enumerating the return values (counts, bounding box, features). It addresses the payment complexity (rare for MCP tools) and multiple input methods (URL vs base64). Minor gap: it doesn't describe error scenarios beyond payment requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured schema already fully documents all four parameters (payment, file_b64, file_url, filename), including constraints like the 200KB limit for base64. The description mentions 'ASCII or binary' which contextually supports the file input parameters but does not add significant semantic detail beyond the comprehensive schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Extract') and resource ('metadata from a PLY file'), clearly defining the tool's scope. It distinguishes itself from sibling format.* tools by specifying PLY format support and explicitly mentioning 'format.auto' for payment flows, clarifying the division of responsibilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool (analyzing PLY files) and explicitly references 'format.auto' as the alternative for understanding payment flows. However, it lacks explicit 'when-not-to-use' guidance regarding the other format-specific siblings (e.g., format.gltf).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
format.stl — STL Analysis (A · Read-only · Idempotent)
Extract metadata from an STL file (ASCII or binary).
Returns triangle count, bounding box, surface area, volume,
manifold analysis (watertight, open edges, non-manifold edges),
triangle quality metrics, vertex deduplication count, mean edge
length, minimum bounding sphere, and a noise estimate derived
from planar region fitting.
Payment required via x402 (USDC on Base). See format.auto
for payment flow details. Privacy policy: https://caliper.fit/privacy
| Name | Required | Description | Default |
|---|---|---|---|
| payment | No | x402 payment proof as a JSON string. Set this when retrying after a payment_required response. Must contain 'transaction' (on-chain tx hash), 'network', and 'priceToken' from the payment_required details. | |
| file_b64 | No | Base64-encoded file content. Use for files under 200 KB. | |
| file_url | No | HTTP/HTTPS URL of the geometry file to analyze. Preferred for large files. | |
| filename | No | Original filename with extension, used for format detection when providing file_b64. | |
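The triangle count format.stl reports for binary STL files follows from the format's fixed layout: an 80-byte header, a little-endian uint32 triangle count, then 50 bytes per facet (twelve 4-byte floats plus 2 attribute bytes). A minimal sketch of that layout check, under the stated assumptions and not Caliper's actual implementation (ASCII STL needs a text parse and is not handled here):

```python
import struct


def stl_triangle_count(data: bytes) -> int:
    """Read the declared triangle count from a binary STL blob.

    Layout: 80-byte header, uint32 LE count at offset 80, then
    50 bytes per triangle. Raises ValueError when the blob cannot
    be a well-formed binary STL.
    """
    if len(data) < 84:
        raise ValueError("too short to be binary STL")
    (count,) = struct.unpack_from("<I", data, 80)
    expected = 84 + 50 * count
    if len(data) != expected:
        # Size mismatch usually means an ASCII STL or a truncated file.
        raise ValueError("size mismatch: likely ASCII STL or truncated file")
    return count
```

The size-consistency check doubles as a cheap binary-vs-ASCII discriminator, since ASCII files that happen to start like a header will almost never match the computed length.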
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/openWorld; description adds critical behavioral context not in structured data: payment mechanism (x402/USDC on Base), comprehensive output value list (bounding box, manifold analysis, noise estimate), and privacy policy URL. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Logically structured: purpose → return values → payment requirements → references. Front-loaded with action and resource. The enumerated return values are lengthy but information-dense and necessary given lack of output schema. No redundant sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a complex file analysis tool: compensates for missing output schema by detailing 10+ specific metrics returned. Covers authentication/payment requirements (x402), file format variants, and privacy compliance. Adequate for an agent to predict tool behavior and outputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter descriptions (payment proof structure, file size thresholds for b64 vs URL). Description does not add parameter semantics beyond schema, relying on structured documentation for the 4 parameters (payment, file_b64, file_url, filename).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action 'Extract metadata' with explicit resource 'STL file' and format variants 'ASCII or binary'. Clearly distinguishes from siblings (format.obj, format.gltf, etc.) by targeting STL specifically and enumerating geometry-specific outputs (triangle count, manifold analysis) unique to mesh analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides prerequisite guidance ('Payment required via x402') and explicitly references sibling tool 'format.auto' for payment flow details. Lists privacy policy. Does not explicitly differentiate when to use vs format.detect or format.batch, though STL specificity provides implicit differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.