
Server Details

Move your AI agent harness between 13 tools, or bootstrap one from GitHub. $49 lifetime.

Status: Healthy
Transport: Streamable HTTP
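
Since the server speaks Streamable HTTP, any standard MCP client can connect to it. Below is a minimal sketch assuming the official TypeScript SDK (`@modelcontextprotocol/sdk`); the endpoint URL is a placeholder, since the listing does not show the real one. Run it as an ES module.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the server's real URL.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.invalid/mcp"),
);

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Enumerate the seven tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```

The later sketches on this page assume a `Client` connected this way.
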
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: install CLI commands, list targets, preview build setup, and preview move. There is no overlap or ambiguity.

Naming Consistency: 5/5

All tools follow a consistent verb_noun snake_case pattern (install_local_cli, list_targets, preview_build_setup, preview_move), making them predictable.

Tool Count: 5/5

With only 4 tools, the server is well-scoped for its purpose of providing installation and preview functionality without unnecessary bloat.

Completeness: 4/5

The tool set covers the main preview and listing operations, but lacks actual execution tools for moving or building, which are only previewed. This is a minor gap given the stated intent.

Available Tools

7 tools
install_local_cli: A

Return no-data local install and MCP wiring commands. The remote server does not install anything and does not receive harness data.

Parameters (JSON Schema)
Name | Required | Description | Default
target_agent | No | Optional: claude-code or codex. | —
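
A sketch of how an agent might call this tool, assuming the `Client` from the connection example above; the helper name is hypothetical.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: fetch the local install and MCP wiring commands.
// target_agent is optional; omit it for non-agent-specific output.
async function getInstallCommands(
  client: Client,
  targetAgent?: "claude-code" | "codex",
) {
  return client.callTool({
    name: "install_local_cli",
    arguments: targetAgent ? { target_agent: targetAgent } : {},
  });
}
```
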
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite the absence of annotations, the description discloses that the remote server does not install anything and receives no harness data, which clarifies the non-destructive, local-only behavior. However, it could detail what the returned commands look like.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no wasted words. The first sentence states the core function, the second clarifies scope. Perfectly sized for the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (1 optional parameter, no output schema), the description is complete. It adequately explains the tool's purpose and behavior without requiring additional details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single optional parameter. The description adds no further meaning beyond what the schema already provides for 'target_agent'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns local install and MCP wiring commands, using specific verb 'return' and resource 'no-data local install and MCP wiring commands'. It distinguishes from siblings like list_targets and preview_build_setup by focusing on command generation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus siblings or alternatives. The description lacks explicit when-to-use or when-not-to-use instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_products: A

List agent-readable Bring Your AI products and supported payment modes. No harness data is accepted.

Parameters (JSON Schema)
No parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It explicitly states 'No harness data is accepted', which implies a read-only, safe operation. This is sufficient for a parameterless list tool with no destructive potential.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. The description is optimally concise and front-loaded with the action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description adequately explains what the tool returns (a list of products and payment modes). It is complete for its simplicity, though a note on the output format or an example could push it to 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters, so the description need not add parameter semantics. The description does not repeat schema information, and the baseline for zero-parameter tools is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists 'Bring Your AI products and supported payment modes' with a specific verb 'List' and a distinct resource. It differentiates from siblings like 'list_targets' by specifying the product scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implicitly indicates usage for listing products with no input, it lacks explicit guidance on when to use this tool versus alternatives, and names no exclusions. The 'No harness data is accepted' note is a constraint, not usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_targets: A

List Bring Your AI target tools. No harness data is accepted or returned.

Parameters (JSON Schema)
No parameters.
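
The evaluation below highlights a discovery-first workflow. A hedged sketch of that chaining, using preview_build_setup (whose required `to` parameter is documented further down) as the follow-on call; the helper name and chosen target are illustrative.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Discovery first: list the supported targets, then preview a build for one.
// The target value here is illustrative; pick one returned by list_targets.
async function discoverThenPreview(client: Client) {
  const targets = await client.callTool({ name: "list_targets", arguments: {} });
  console.log(targets.content); // agent-readable list of targets

  return client.callTool({
    name: "preview_build_setup",
    arguments: { to: "claude-code" }, // required: claude-code or codex
  });
}
```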

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by specifying the exact scope (13 AI tools), the structured information returned (capabilities and descriptions), and the purpose within the workflow. It doesn't mention potential limitations like rate limits or authentication requirements, but provides substantial behavioral context for a read-only discovery tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences that are perfectly front-loaded: the first describes exactly what the tool returns, the second provides crucial usage guidance. Every word earns its place with zero redundancy or wasted space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter discovery tool with no annotations and no output schema, the description provides excellent context about what information will be returned and how it fits into the broader workflow. The only minor gap is not explicitly describing the output format/structure, but given this is a simple list tool, the description is quite complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the baseline would be 3. However, the description adds value by explicitly stating there are no parameters needed ('Call this first' implies no configuration required), which helps the agent understand this is a simple discovery call with no input requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('13 AI tools BringYour can produce harness files for'), with specific details about what information is included (target's read/write/paste capability and brief description). It explicitly distinguishes this tool from its sibling 'install_harness' by explaining the relationship between them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Call this first to discover what 'target' values install_harness accepts'), creating a clear workflow relationship with a specific sibling tool. It establishes a prerequisite relationship rather than just describing functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

preview_build_setup: B

Free no-data preview for building a user's first Claude Code or Codex setup. Does not accept GitHub handles, generated memories, mappings, or file content.

Parameters (JSON Schema)
Name | Required | Description | Default
to | Yes | Target tool: claude-code or codex. | —
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description indicates the tool is safe and non-destructive by stating 'Free no-data preview' and listing what it does not accept. However, it fails to describe any side effects, permissions, or output behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: one sentence for purpose and one for exclusions. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description covers the core purpose and constraints adequately. It could mention whether any prior setup is needed, but overall it is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the schema's description for the 'to' parameter is clear. The description does not add additional meaning beyond the schema, resulting in a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a free, no-data preview for building a user's first Claude Code or Codex setup, which aligns with the tool name. However, it does not explicitly distinguish from sibling tools like preview_move, leaving some ambiguity about when to use each.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists exclusions (no GitHub handles, memories, etc.) but provides no explicit guidance on when to use this tool versus alternatives, nor does it mention prerequisites or ideal scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

preview_move: A

Free no-data preview for moving a harness between Claude Code and Codex. Returns feasibility copy only. Does not accept or return mappings, file paths, generated content, or validation notes.

Parameters (JSON Schema)
Name | Required | Description | Default
to | Yes | Target tool: claude-code or codex. | —
from | Yes | Source tool: claude-code or codex. | —
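
A sketch of a call, again assuming a connected `Client`; the helper name is hypothetical, and both arguments carry only tool names, per the schema.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Preview moving a harness from Claude Code to Codex. Both parameters are
// required and take plain tool names, never file paths or mappings.
async function previewMove(client: Client) {
  return client.callTool({
    name: "preview_move",
    arguments: { from: "claude-code", to: "codex" },
  });
}
```
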
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description effectively discloses that the tool is a free preview that returns only feasibility copy and does not accept or return mappings, file paths, or validation notes. This clearly sets behavioral boundaries, though it could explicitly state it is read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences. The first sentence immediately states the core purpose, and the second sentence lists exclusions. Every sentence is essential, with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description only says 'Returns feasibility copy only,' which is vague about the exact return format. It could specify whether it returns a boolean, string, or structured data. The rest is adequate for a simple preview tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already documents both parameters with descriptions, giving 100% coverage. The description adds value by explaining that the tool moves a harness between tools and that parameters are just tool names, not file paths or mappings, which goes beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool is a free no-data preview for moving a harness between Claude Code and Codex, using specific verbs and resources. It distinguishes itself from sibling tools like install_local_cli, list_targets, and preview_build_setup, which handle different actions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus alternatives like preview_build_setup or install_local_cli. It lacks context on prerequisites or exclusions, leaving the agent to infer usage solely from the tool name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quote_lifetime_license: B

Quote a Bring Your AI lifetime license in USD. Human checkout uses Stripe Payment Links; agent checkout can settle a Link-issued Stripe shared payment token.

Parameters (JSON Schema)
Name | Required | Description | Default
locale | No | Optional BCP 47 locale. | —
currency | No | Optional requested currency. Only USD is currently supported. | —
product_id | No | Optional product id. | bringyour_founder_lifetime
buyer_country | No | Optional ISO country code. | —
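
All four parameters are optional, so a minimal call can pass none of them; the sketch below (hypothetical helper, illustrative argument values) shows them populated.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Quote the lifetime license. Every argument shown is optional.
async function quoteLicense(client: Client) {
  return client.callTool({
    name: "quote_lifetime_license",
    arguments: {
      locale: "en-US",      // BCP 47 locale
      currency: "USD",      // only USD is currently supported
      buyer_country: "US",  // ISO country code
      // product_id omitted: defaults to bringyour_founder_lifetime
    },
  });
}
```
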
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description mentions SPT and x402 are reported as unavailable until enabled, but lacks details on response format, side effects, or required permissions. With no annotations, more behavioral context is needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and an important behavioral note, no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is adequate for a simple tool with well-described optional parameters, but it lacks return-value details and usage context, and there is no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with each parameter described, so the description does not add additional parameter meaning beyond the schema. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Quote' and the resource 'Bring Your AI lifetime license in USD', which is distinct from sibling tools like 'list_products' or 'start_checkout'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives; it does not specify prerequisites or context for quoting.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

start_checkout: A

Start checkout. With payment_mode=stripe_spt plus shared_payment_granted_token and buyer_email, settles a Stripe PaymentIntent and returns the signed license without opening a browser.

Parameters (JSON Schema)
Name | Required | Description | Default
email | No | Alias for buyer_email. | —
product_id | No | Optional product id. | bringyour_founder_lifetime
buyer_email | No | Email bound to the issued license. Required for stripe_spt settlement. | —
payment_mode | No | stripe_payment_link, stripe_link, link, stripe_checkout, stripe_acp, stripe_spt, or x402. | —
shared_payment_granted_token | No | Link-issued Stripe shared payment token. Required for stripe_spt settlement. | —
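
A sketch of the agent-side settlement path described above. The helper name is hypothetical, and the email and token are caller-supplied placeholders; a real token is issued by Stripe Link during the shared-payment-token flow.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// stripe_spt settlement: both buyer_email and shared_payment_granted_token
// are required for this mode, per the parameter descriptions above.
async function settleWithSharedPaymentToken(
  client: Client,
  buyerEmail: string, // bound to the issued license
  sptToken: string,   // Link-issued Stripe shared payment token
) {
  return client.callTool({
    name: "start_checkout",
    arguments: {
      payment_mode: "stripe_spt",
      buyer_email: buyerEmail,
      shared_payment_granted_token: sptToken,
    },
  });
}
```
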
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description adds valuable behavioral context: it discloses the Stripe Payment Link fallback and that SPT/x402 are unavailable until private-preview access is enabled. This helps set expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff, information is front-loaded and efficiently conveys the tool's purpose and key return behaviors.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with 2 optional parameters and no output schema, the description provides sufficient context: purpose, return type, and special mode behavior. Slightly lacking on error handling or next steps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, so the baseline is 3. The description does not add additional meaning beyond what is already in the schema (e.g., defaults are already in schema descriptions).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool starts a structured checkout handoff and specifies return values (Stripe Payment Link fallback). It is distinct from sibling tools like preview_build_setup or quote_lifetime_license.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage for starting checkout with different payment modes but lacks explicit when-to-use or when-not-to-use guidance compared to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
