Glama

RTK Motion — 4D Motion Capture Data

Ownership verified

Server Details

Agent-to-agent 4D motion capture data marketplace with crypto payments on Base.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

11 tools
browse_catalog

Browse hierarchical catalog of motion-capture, threat-intel, and rehab-biomechanics datasets. No path → root categories. Category path → children. Lesson path → file inventory, pricing, and MCP tool names. Free, no payment required.

Parameters (JSON Schema)

Name | Required | Description | Default
path | No | Catalog path to browse, e.g. 'BJJ' or 'BJJ/Training/KimuraFromSideControl/Lesson001'. Omit for root. | —
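The path contract described above (no path → root, category path → children, lesson path → inventory) can be sketched as a small helper. This is illustrative only: the server's actual routing is not published, and the 'LessonNNN' leaf convention is inferred from the documented examples.

```python
def expected_browse_result(path=None):
    """Map a browse_catalog path to the result kind the description
    promises: no path -> root categories, a category path -> children,
    a lesson path -> file inventory, pricing, and MCP tool names."""
    if not path:
        return "root categories"
    # Leaf lessons follow the 'LessonNNN' pattern seen in the examples
    if path.rsplit("/", 1)[-1].startswith("Lesson"):
        return "file inventory, pricing, and MCP tool names"
    return "child categories"
```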
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses 'Free, no payment required' (cost transparency) and explains behavioral contract for different path inputs (what return structures to expect at root/category/lesson levels). Could mention read-only safety or auth requirements to reach 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, zero waste. Front-loaded with purpose ('Browse hierarchical catalog...'), followed by path behavior specification, ending with cost constraint. Each sentence conveys distinct value (purpose, usage logic, cost).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given single parameter and no output schema, description compensates by detailing return values for different traversal levels (file inventory, pricing, MCP tool names). Covers domain context (motion-capture, threat-intel, rehab-biomechanics). Complete enough for selection and basic invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but description adds significant semantic value beyond schema. Schema only states 'Catalog path to browse' with examples; description explains the functional mapping of different path patterns (no path→root, category→children, lesson→inventory/pricing/tools), clarifying the hierarchical navigation logic.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb+resource: 'Browse hierarchical catalog of motion-capture, threat-intel, and rehab-biomechanics datasets.' Distinguishes from siblings by emphasizing navigation/discovery versus data retrieval (get_* siblings) or search (search_catalog). Domain specificity clarifies scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear contextual guidance for the path parameter: 'No path → root categories. Category path → children. Lesson path → file inventory...' Implicitly differentiates from search_catalog by emphasizing hierarchical traversal, though explicit 'when to use search vs browse' statement would strengthen this to 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_bvh

BVH skeletal animation for robotic_grappling_training or biomechanical_rehab_validation. 120 Hz full-body 24-joint hierarchy, latency < 500 ms. $10 USDC per file. Pass payment_tx after sending USDC to the RTK wallet.

Parameters (JSON Schema)

Name | Required | Description | Default
competitor | Yes | Competitor name (e.g. 'Boca' or 'Thane') | —
payment_tx | No | Transaction hash proving payment of $10 USD to the RTK wallet | —
lesson_path | No | Lesson path, e.g. 'BJJ/Training/KimuraFromSideControl/Lesson001'. Defaults to first available lesson. | —
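The description implies a two-step flow: call once to learn the price and wallet, send $10 USDC, then call again with payment_tx. A minimal argument builder for that flow is sketched below; the helper name is hypothetical, and the actual MCP client invocation is omitted.

```python
def build_get_bvh_args(competitor, payment_tx=None, lesson_path=None):
    """Assemble arguments for a get_bvh call. payment_tx is left out
    until the USDC transfer has actually been sent, matching the
    'pass payment_tx after sending USDC' instruction."""
    args = {"competitor": competitor}
    if lesson_path is not None:
        args["lesson_path"] = lesson_path  # else server picks first lesson
    if payment_tx is not None:
        args["payment_tx"] = payment_tx
    return args
```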
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full burden and successfully discloses critical behavioral traits: $10 USDC cost per file, latency constraints (<500 ms), and required payment workflow sequence (send USDC, then pass payment_tx). Missing return value format prevents a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely dense and front-loaded: opens with resource type and use cases, follows with technical specs, then cost and payment instructions. No redundant words; every sentence conveys essential domain, performance, or financial information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers purpose, technical specifications, and payment requirements for a paid retrieval tool, but lacks description of return values (file content, URL, or path) which is significant given no output schema exists. Missing error handling for failed payments.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with clear descriptions. Description adds valuable workflow context for payment_tx ('Pass payment_tx after sending USDC'), establishing a temporal sequence not evident in the schema alone. Full schema coverage sets the baseline at 3; the workflow note is only a minor addition beyond it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific output format (BVH skeletal animation) and precise use cases (robotic_grappling_training, biomechanical_rehab_validation) that distinguish it from siblings like get_keypoints_3d or get_video_url. Technical specs (120 Hz, 24-joint) further clarify scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides positive use cases (grappling training, rehab validation) and mandatory payment workflow, but lacks explicit differentiation from similar data retrieval tools (e.g., when to choose this over get_keypoints_3d) or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_keypoints_3d

Triangulated 3D joint positions for motion_analysis or action_recognition_training. 120 Hz, 24 joints, JSON format, latency < 500 ms. $10 USDC. Pass payment_tx after sending USDC.

Parameters (JSON Schema)

Name | Required | Description | Default
payment_tx | No | Transaction hash proving payment of $10 USD to the RTK wallet | —
lesson_path | No | Lesson path, e.g. 'BJJ/Training/KimuraFromSideControl/Lesson001'. Defaults to first available lesson. | —
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates cost ($10 USDC), performance characteristics (latency < 500 ms, 120 Hz), output format (JSON), and data volume (24 joints). It does not, however, disclose error handling, rate limits, or authentication requirements beyond the payment transaction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with two high-density sentences. The first front-loads the core value proposition (3D joint positions) with technical specifications; the second provides the payment workflow instruction. No redundant or filler text is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description partially compensates by mentioning JSON format and joint count, but fails to detail the actual structure of the returned data (e.g., coordinate fields, joint naming convention, metadata). For a tool involving cryptocurrency payment and specialized data retrieval, the input side is well-covered but the output side remains under-specified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the input schema has 100% description coverage, the description adds crucial procedural context not present in the schema: the temporal workflow requirement to 'Pass payment_tx after sending USDC'. This operational guidance significantly enhances understanding of how to populate the parameter correctly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the resource provided (triangulated 3D joint positions) and specific use cases (motion_analysis, action_recognition_training). Technical specs (120 Hz, 24 joints) further clarify the output format. However, it does not explicitly differentiate from sibling 'get_bvh' which may also provide 3D motion data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies appropriate contexts via the use cases mentioned (motion analysis vs. action recognition training). It provides critical workflow guidance for the payment parameter ('Pass payment_tx after sending USDC'). However, it lacks explicit comparison to alternatives like 'get_bvh' or guidance on when to select this over other retrieval tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_rehab_profile

Patient demographics, injury history, and medications for a rehab case. Free — evaluate case relevance before purchasing biomechanical reports for insurance_claim_validation or PT_outcome_measurement.

Parameters (JSON Schema)

Name | Required | Description | Default
case_path | Yes | Case path, e.g. 'PhysicalRehab/40-50/Male/LabrumTear' | —
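The case_path example suggests a fixed category/age-range/sex/injury layout. A small parser along those lines is sketched below; the field meanings are inferred from the single documented example, not from a published spec.

```python
def parse_case_path(case_path):
    """Split a case path such as 'PhysicalRehab/40-50/Male/LabrumTear'
    into its apparent components (inferred, not specified)."""
    category, age_range, sex, injury = case_path.split("/")
    return {"category": category, "age_range": age_range,
            "sex": sex, "injury": injury}
```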
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adds crucial cost context ('Free') and workflow sequencing (evaluate before purchasing), but lacks explicit safety confirmation (read-only status), error behavior, or return format details that would fully compensate for missing annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. Front-loaded with data contents, followed by cost model and usage guidelines. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool with no output schema, the description adequately covers what data is returned, the cost model, and the workflow relationship to other tools. It explains the return values (demographics, history, meds) despite lacking formal output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (case_path is fully documented with example format), establishing baseline 3. The description adds no additional parameter context, but none is needed given the single, well-documented parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what data is returned (patient demographics, injury history, medications) and identifies the resource (rehab case). It distinguishes from sibling tools like get_rehab_report by positioning this as the free preliminary step versus purchased biomechanical reports.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent explicit guidance: 'evaluate case relevance before purchasing biomechanical reports' clearly states when to use this tool (before purchase) and implies the alternative (purchasing reports). It also specifies use cases: insurance_claim_validation or PT_outcome_measurement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_rehab_report

Per-exercise biomechanical report for PT_outcome_measurement or medical_insurance_review. CoM stability, rep detection, joint load, bilateral asymmetry, ROM, fatigue markers. JSON + TXT formats. Latency < 200 ms. $0.25 USDC.

Parameters (JSON Schema)

Name | Required | Description | Default
exercise | Yes | Exercise name, e.g. 'Burpees', 'Pushups', 'Squats', 'HeadRotation' | —
case_path | Yes | Case path, e.g. 'PhysicalRehab/40-50/Male/LabrumTear' | —
payment_tx | No | Transaction hash proving payment of $0.25 USD | —
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and discloses key behavioral traits: output formats (JSON + TXT), performance constraints (Latency < 200 ms), and cost ($0.25 USDC). However, it omits safety characteristics (read-only vs. destructive), error handling behavior, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Exceptionally dense and front-loaded. Every token serves a purpose: audience, content metrics, technical specs, and pricing. No filler words or redundant explanations. The structure moves logically from purpose to contents to operational constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description adequately compensates by listing report contents (biomechanical metrics) and formats. However, it could improve by describing the output structure (e.g., whether it returns file paths, raw data, or URLs) and error scenarios (e.g., invalid payment_tx handling).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage (baseline 3), the description adds semantic value by stating the exact cost ('$0.25 USDC'), which clarifies the purpose of the payment_tx parameter. It also reinforces the exercise parameter domain by listing examples (Burpees, Pushups, Squats) that align with the schema's intent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states it generates a 'Per-exercise biomechanical report' and distinguishes its scope with specific use cases (PT_outcome_measurement, medical_insurance_review) and detailed metrics (CoM stability, joint load, bilateral asymmetry, etc.). The 'Per-exercise' qualifier effectively differentiates it from sibling tools like get_rehab_summary and get_rehab_profile.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context through specific use cases ('for PT_outcome_measurement or medical_insurance_review'), but lacks explicit when-to-use guidance versus siblings like get_rehab_profile or get_rehab_summary. No exclusions or prerequisites are mentioned beyond the payment requirement implied by the price listing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_rehab_summary

Cross-exercise biomechanical summary for insurance_claim_validation or rehab_progress_tracking. Bilateral asymmetry indices, ROM for all major joints, fatigue markers, clinical synthesis identifying persistent deficits. JSON + TXT formats. Latency < 300 ms. $0.50 USDC.

Parameters (JSON Schema)

Name | Required | Description | Default
case_path | Yes | Case path, e.g. 'PhysicalRehab/40-50/Male/LabrumTear' | —
payment_tx | No | Transaction hash proving payment of $0.50 USD | —
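Since per-exercise reports cost $0.25 (get_rehab_report) and the cross-exercise summary $0.50 (get_rehab_summary), an agent can compare costs before purchasing. The helper below uses only the prices stated in the descriptions; the two products differ in content, so this is a cost floor, not an equivalence.

```python
REPORT_PRICE_USDC = 0.25   # get_rehab_report, per exercise
SUMMARY_PRICE_USDC = 0.50  # get_rehab_summary, per case

def cheaper_option(n_exercises):
    """Return which purchase costs less for n_exercises: individual
    per-exercise reports or one cross-exercise summary."""
    reports_total = n_exercises * REPORT_PRICE_USDC
    return "reports" if reports_total < SUMMARY_PRICE_USDC else "summary"
```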
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full burden and delivers: output formats (JSON + TXT), performance characteristics (Latency < 300 ms), cost ($0.50 USDC), and detailed content schema (bilateral asymmetry indices, ROM, fatigue markers). Discloses payment requirement implicitly through cost parameter and schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four dense sentences with zero waste. Front-loaded with purpose ('Cross-exercise biomechanical summary...'), followed by content details, technical specs, and cost. Every clause delivers distinct information (use cases, metrics, formats, latency, pricing).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a 2-parameter tool without output schema. Describes return value content (clinical synthesis, deficits), format, and operational constraints (cost, latency). Missing only failure modes (e.g., invalid payment) to achieve completeness given the payment requirement complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds value by clarifying case_path represents 'Cross-exercise' aggregation (not single-session) and confirming payment_tx cost ($0.50 USDC matches schema's $0.50 USD). This semantic context aids parameter understanding beyond raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific resource (cross-exercise biomechanical summary) and use cases (insurance_claim_validation, rehab_progress_tracking). Distinguishes from siblings like get_rehab_report via 'Cross-exercise' scope and 'clinical synthesis' focus. However, lacks explicit contrast with similar get_rehab_profile/get_rehab_report tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides specific use cases (insurance validation, rehab tracking) implying when to use, but offers no explicit 'when not to use' guidance or comparison to sibling alternatives like get_rehab_report. Usage context is implied rather than directive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_robotarget

Robot-ready joint trajectories (NPZ) for sim_to_real_kinematic_retargeting. Compatible with MoveIt/ROS and Isaac Sim. 120 Hz, 24 DOF, latency < 500 ms. $15 USDC per file. Pass payment_tx after sending USDC.

Parameters (JSON Schema)

Name | Required | Description | Default
competitor | Yes | Competitor name (e.g. 'Boca' or 'Thane') | —
payment_tx | No | Transaction hash proving payment of $15 USD to the RTK wallet | —
lesson_path | No | Lesson path, e.g. 'BJJ/Training/KimuraFromSideControl/Lesson001'. Defaults to first available lesson. | —
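An NPZ payload can be read with numpy's standard savez/load pair. The sketch below simulates a downloaded file in memory; the array key 'qpos' and the (frames, 24) layout are assumptions, since the actual archive keys are undocumented.

```python
import io

import numpy as np

# Simulate a downloaded NPZ payload: at 120 Hz and 24 DOF, trajectories
# would plausibly arrive as arrays of shape (frames, 24).
buf = io.BytesIO()
np.savez(buf, qpos=np.zeros((120, 24)))
buf.seek(0)

data = np.load(buf)        # NpzFile; list array names via data.files
trajectory = data["qpos"]  # (frames, dof) joint trajectory
```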
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden effectively. It reveals cost ($15 USDC), technical constraints (latency <500ms), output format (NPZ), and integration targets. Could improve by mentioning error behavior for invalid payments.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely dense but zero waste. Technical specs (120Hz/24DOF), compatibility, pricing, and usage instruction each serve distinct purposes. Front-loaded with the resource type.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Substantially complete for a paid robotics data retrieval tool. Covers format, integration ecosystem, pricing model, and authentication mechanism despite missing output schema. Minor gap: no mention of failure modes for insufficient payment.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3). Description adds crucial workflow semantics: establishes the $15 cost and explicitly sequences the payment action ('after sending USDC'), clarifying that payment_tx is a post-payment confirmation hash, not the payment itself.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states it returns 'Robot-ready joint trajectories (NPZ)' for 'sim_to_real_kinematic_retargeting', with clear compatibility targets (MoveIt/ROS, Isaac Sim) and technical specs (120Hz, 24DOF) that distinguish it from sibling tools like get_bvh or get_keypoints_3d.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides critical payment workflow guidance ('Pass payment_tx after sending USDC'), implying this is a paid resource retrieval tool. However, lacks explicit guidance on when to choose this over get_bvh or get_keypoints_3d for different use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_threat_profile

Monitored location profile — assets, feed types, scan interval, term sets. Free — evaluate coverage and data freshness before purchasing threat assessments for risk_underwriting or security_operations.

Parameters (JSON Schema)

Name | Required | Description | Default
region | No | Region, e.g. 'Virginia' or 'Florida' | —
location | Yes | Location slug, e.g. 'culpeper-town' | —
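The description advises agents to "evaluate coverage and data freshness before purchasing". A hypothetical pre-purchase screen is sketched below; the field names ('assets', 'scan_interval_s') are illustrative, since the real profile payload shape is undocumented.

```python
def worth_assessing(profile, max_scan_interval_s=3600):
    """Decide whether a free threat profile justifies buying a paid
    assessment: it must list monitored assets and scan frequently
    enough. Field names are assumptions, not the documented schema."""
    has_coverage = bool(profile.get("assets"))
    is_fresh = profile.get("scan_interval_s", float("inf")) <= max_scan_interval_s
    return has_coverage and is_fresh
```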
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It successfully conveys cost ('Free') and evaluative purpose ('evaluate coverage and data freshness'), but omits explicit read-only confirmation, auth requirements, or rate limits that would fully establish safety profile.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense sentences with zero waste. First sentence uses em-dash to front-load returned data components; second sentence delivers business logic and purchasing context. Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output schema, but description compensates by enumerating profile components (assets, feeds, intervals, terms) and business model context. Deducted one point for not explicitly confirming non-destructive read behavior given 'threat' domain sensitivity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (region and location both documented), establishing baseline 3. Description adds implicit context that 'location' refers to a monitored site via 'Monitored location profile,' but does not elaborate parameter formats beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description identifies specific resources retrieved (assets, feed types, scan interval, term sets) and the target (monitored location profile), functioning as a concrete noun phrase. However, it could more explicitly distinguish from sibling 'get_threat_summary' by clarifying this returns raw configuration/data versus aggregated analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states this is a free evaluation tool to use 'before purchasing threat assessments,' directly naming downstream workflows (risk_underwriting, security_operations). Provides clear temporal/business logic for when to invoke versus paid alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_threat_summary

Location threat assessment for risk_underwriting, facility_security_audit, or insurance_risk_modeling. Lite ($0.50): confidence-scored alerts, event distribution. Full ($1.00): + enriched entities, temporal context, corroboration scoring. Latency < 200 ms. Browse ThreatIntel via browse_catalog first.

Parameters (JSON Schema)

Name | Required | Description | Default
tier | No | 'lite' ($0.10) = summary only, 'full' ($0.25) = summary + flagged items | lite
region | No | Region, e.g. 'Virginia' or 'Florida'. Required to resolve the R2 path. | —
location | Yes | Location slug, e.g. 'culpeper-town' or 'fbcl-longwood' | —
payment_tx | No | Transaction hash proving USDC payment | —
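A call to get_threat_summary can be assembled as follows. The helper name is hypothetical; note that the prices listed in the tool description ($0.50/$1.00) and in the tier parameter ($0.10/$0.25) disagree, so an agent should confirm the actual charge before sending USDC.

```python
def build_threat_summary_args(location, region=None, tier="lite", payment_tx=None):
    """Assemble arguments for get_threat_summary. 'lite' is the schema
    default; 'full' adds enriched entities, temporal context, and
    corroboration scoring."""
    args = {"location": location, "tier": tier}
    if region is not None:
        args["region"] = region  # required to resolve the R2 path
    if payment_tx is not None:
        args["payment_tx"] = payment_tx
    return args
```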
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Provides extensive behavioral details (latency <200ms, output schema, payment prerequisite) which is necessary given no annotations. However, description states pricing as '$0.50/$1.00' while the input schema parameter description states '$0.10/$0.25', creating a serious contradiction on cost—a critical behavioral attribute.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely dense and front-loaded structure. Every clause earns its place: use cases, tier pricing/features, performance SLA, and prerequisite workflow—all in a compact format with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description commendably details return values for both tiers. Coverage of latency and payment prerequisites is good. However, the pricing contradiction with schema parameters creates a material gap in reliability, reducing confidence in the description's completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, baseline is 3. Description adds semantic value by describing what each tier returns (confidence-scored alerts vs enriched entities), but contradicts the schema on tier pricing and content descriptions ('summary only' vs 'confidence-scored alerts').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear purpose statement ('Location threat assessment') plus a specific resource (location) and three concrete use cases (risk_underwriting, facility_security_audit, insurance_risk_modeling). However, it does not explicitly distinguish itself from the sibling tool 'get_threat_profile', which could cause selection uncertainty.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states appropriate contexts (risk underwriting, security audits). Provides clear prerequisite workflow ('Browse ThreatIntel via browse_catalog first') and tier selection guidance by detailing the output differences between Lite and Full tiers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_video_url

Multi-view grappling video for pose_estimation_training or technique_analysis. 1080p synced cameras, 15-min signed streaming URL with Range support. $5 USDC per view. Pass payment_tx after sending USDC.

Parameters (JSON Schema)

Name | Required | Description | Default
view | Yes | Camera view number |
payment_tx | No | Transaction hash proving payment of $5 USD to the RTK wallet |
lesson_path | No | Lesson path, e.g. 'BJJ/Training/KimuraFromSideControl/Lesson001'. Defaults to first available lesson. |
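The two-step payment workflow implied by this schema (call, send USDC, then call again with the transaction hash) can be sketched as follows. The helper and the transaction hash are hypothetical placeholders; the field names and the lesson-path example come from the schema above:

```python
# Hypothetical helper mirroring the get_video_url schema: 'view' is the
# only required field, and omitting 'lesson_path' lets the server fall
# back to its documented default (first available lesson).
def build_video_url_args(view, lesson_path=None, payment_tx=None):
    args = {"view": view}
    if lesson_path is not None:
        args["lesson_path"] = lesson_path
    if payment_tx is not None:
        args["payment_tx"] = payment_tx  # tx hash proving the $5 payment
    return args

# Second call in the workflow: after sending USDC, attach payment_tx
# (the hash below is a placeholder, not a real transaction).
paid = build_video_url_args(
    1,
    lesson_path="BJJ/Training/KimuraFromSideControl/Lesson001",
    payment_tx="0xabc123",
)
```

An unpaid first call would simply omit payment_tx, which is presumably how an agent discovers the price and the wallet to pay.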
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. Successfully communicates cost ($5 USDC per view), temporal constraints (15-min signed URL), technical capabilities (Range support), and stateful requirements (payment verification). Missing: error behavior, USDC recipient address, or retry policies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. Front-loaded with use cases and technical specs (1080p, synced cameras), followed by cost/temporal constraints and payment workflow. Every clause provides unique information not replicated in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Moderately complete for a 3-parameter tool with payment complexity. Mentions return type (streaming URL) but lacks output schema details or structure. Critical omission: payment recipient address (RTK wallet) is referenced but not specified in description or schema, hindering successful invocation despite good parameter coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% providing baseline 3. Description adds valuable workflow context beyond schema: 'Pass payment_tx after sending USDC' clarifies temporal sequence for the payment_tx parameter, and 'Multi-view' contextualizes the view parameter's purpose in the camera system.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states it retrieves 'Multi-view grappling video' with explicit use cases (pose_estimation_training, technique_analysis). Clearly distinguishes from siblings like get_bvh (motion capture files), get_keypoints_3d (skeletal data), and get_rehab_* (medical reports) by specifying video content and 1080p camera context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides partial workflow guidance ('Pass payment_tx after sending USDC') and use-case hints, but lacks explicit prerequisites (e.g., 'use browse_catalog to find lesson_path first') and comparisons to siblings, such as when to prefer get_video_url over get_bvh for a given analysis need.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_catalog

Full-text search across motion-capture, threat-intel, and rehab datasets. Matches labels, paths, competitor names, techniques. Returns full lesson detail with pricing so you can immediately call purchase tools. Free.

Parameters (JSON Schema)

Name | Required | Description | Default
query | Yes | Search terms, e.g. 'kimura', 'BJJ', 'Boca', 'grappling'. Matched against lesson labels, path segments, and competitor names. |
sport | No | Filter by sport, e.g. 'BJJ'. Matches top-level path segment. |
competitor | No | Filter by competitor name, e.g. 'Boca' or 'Thane'. |
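For comparison with the paid tools, a search call is a single free step. A sketch under the same assumption of a hypothetical helper, using only the field names and example values from the schema above:

```python
# Hypothetical helper for search_catalog: 'query' is required; the two
# optional filters narrow results by sport (top-level path segment) or
# by competitor name, per the schema descriptions.
def build_search_args(query, sport=None, competitor=None):
    args = {"query": query}
    if sport is not None:
        args["sport"] = sport            # matches top-level path segment
    if competitor is not None:
        args["competitor"] = competitor
    return args

# Free-text search narrowed to one sport, using the schema's examples.
args = build_search_args("kimura", sport="BJJ")
```

Because the result includes pricing, an agent can feed a hit's lesson path straight into the purchase tools without an intermediate browse_catalog call.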
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses search scope (what datasets), matching fields (labels, paths, competitor names), return structure ('full lesson detail with pricing'), and cost ('Free'). Missing auth requirements and rate limits, but strong coverage of functional behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence defines scope and matching behavior; second sentence discloses return values, enabling workflow, and cost. Information-dense and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, and description adequately compensates by specifying 'full lesson detail with pricing' as the return. Covers all three dataset domains and the purchase workflow implication. Would need rate limits or pagination details for a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions and examples already provided (e.g., 'kimura', 'BJJ'). Description aggregates that matching occurs against 'labels, paths, competitor names, techniques' but adds no syntactic guidance beyond the schema. Baseline 3 appropriate when schema is self-documenting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: declares 'Full-text search' (specific verb + mechanism) across three distinct dataset types (motion-capture, threat-intel, rehab), clearly distinguishing it from sibling 'browse_catalog' and specific 'get_*' retrieval tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear workflow context: states output includes pricing to 'immediately call purchase tools,' guiding the agent on the search→purchase sequence. Lacks explicit 'when not to use' contrast with browse_catalog, but 'full-text' vs implied browsing distinction is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
