
BringYour

Server Details

Move your AI agent harness between 13 tools, or bootstrap one from GitHub. $49 lifetime.

Status: Healthy
Transport: Streamable HTTP
Tool Descriptions: A

Average 4.3/5 across 4 of 4 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: bootstrap_from_github creates starter harnesses from GitHub profiles, install_harness generates installation files, list_targets provides metadata about supported tools, and migrate_harness converts between harness formats. The descriptions clearly differentiate their functions, making misselection unlikely.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with clear, descriptive names: bootstrap_from_github, install_harness, list_targets, and migrate_harness. The naming convention is uniform throughout, using snake_case and action-oriented verbs that accurately reflect each tool's function.

Tool Count: 5/5

Four tools is well-scoped for a harness management server, covering the complete lifecycle: discovery (list_targets), creation (bootstrap_from_github), installation (install_harness), and migration (migrate_harness). Each tool earns its place without redundancy, making the set efficient and focused.

Completeness: 5/5

The tool surface provides complete coverage for harness management: it supports discovery of targets, creation from scratch or GitHub profiles, installation into tools, and migration between formats. There are no obvious gaps—agents can perform all essential operations without dead ends, including handling edge cases like paste-only sources.

Available Tools

4 tools
bootstrap_from_github: A

Generate a starter harness from a user's public GitHub profile. Reads their repos, topics, commit style, and pinned projects, synthesizes a first-draft CLAUDE.md / AGENTS.md / rules file for the chosen target. The agent writes the returned files to the user's machine. No OAuth required; uses unauthenticated public-repo data. Optional gh_token arg increases GitHub rate limits.

Parameters (JSON Schema):
- to (required): Target tool. Same 13 options as install_harness.
- gh_token (optional): GitHub PAT for higher rate limits. Send only if the user explicitly supplied one; don't ask for it by default.
- github_username (required): GitHub handle without the @, e.g. 'torvalds'. Must have public repos.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behaviors: it writes files to the user's machine (output action), requires no OAuth (authentication needs), uses public data only (data scope), and mentions rate limit implications with the optional token. It doesn't cover error handling or file overwrite behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states the core purpose, second explains the output action and authentication, third covers the optional parameter. Every sentence adds essential information with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, no annotations, and no output schema, the description is adequate but has gaps. It covers the main action and behavioral context well, but doesn't explain what the 13 'to' options are, what the output files contain, or how errors are handled. Given the complexity, it should provide more guidance on the target parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema: it clarifies that 'gh_token' increases rate limits and should only be used if explicitly supplied, but doesn't explain the 'to' parameter's 13 options or provide examples for 'github_username'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Generate a starter harness'), the source ('from a user's public GitHub profile'), and the output ('CLAUDE.md / AGENTS.md / rules file'). It distinguishes this tool from siblings like 'install_harness' by focusing on creation from GitHub data rather than installation or listing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to create starter files from GitHub data) and mentions an optional parameter for rate limits. However, it doesn't explicitly state when NOT to use it or compare it to alternatives like 'install_harness' for different scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

install_harness: A

Generate the files needed to install a minimal BringYour harness into a specific AI tool. Returns a map of relative file paths → content. The calling agent should write these files to the appropriate directory on the user's machine (e.g., $HOME/.claude for Claude Code, .cursor/ at the project root for Cursor). Does NOT touch the caller's filesystem — the agent does that.

Parameters (JSON Schema):
- target (required): Target tool. One of: claude-code, cursor, codex, openclaw, aider, continue, cline, goose, zed, roo-code, chatgpt, claude-ai, copilot. Call list_targets first if unsure.
- identity (optional): Short self-description the user wants baked into the harness (e.g., 'Go backend engineer, prefers terse explanations, dislikes comments that explain what code does'). If empty, a neutral starter template is emitted.
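Because the tool returns a {relative_path: content} map and never touches the filesystem, the calling agent has to do the write step itself. A minimal sketch of that step, assuming the result is a plain path-to-text dict as described above (write_harness and the sample result are illustrative, not part of the server):

```python
from pathlib import Path
import tempfile

def write_harness(files: dict[str, str], root: Path) -> list[Path]:
    """Write each returned file under root, creating parent dirs as needed."""
    written = []
    for rel_path, content in files.items():
        dest = root / rel_path
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content, encoding="utf-8")
        written.append(dest)
    return written

# A Claude Code target would land under $HOME/.claude per the description;
# this demo writes into a throwaway directory instead.
result = {"CLAUDE.md": "# Harness\n", "skills/foo/SKILL.md": "# Skill\n"}
with tempfile.TemporaryDirectory() as tmp:
    paths = write_harness(result, Path(tmp))
    print([p.name for p in paths])
    # → ['CLAUDE.md', 'SKILL.md']
```

The destination root depends on the target (e.g. $HOME/.claude for Claude Code, .cursor/ at the project root for Cursor), so the agent picks it before writing.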
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it generates files but 'Does NOT touch the caller's filesystem — the agent does that,' clarifies the output format ('map of relative file paths → content'), and explains the agent's responsibility for writing files. It doesn't cover rate limits or error handling, but provides essential operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by output details and behavioral constraints. Every sentence earns its place: first states what it does, second describes output format, third explains agent responsibility, and fourth clarifies what it doesn't do. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does well by explaining the output format and behavioral constraints. It covers the tool's purpose, usage context, and limitations. However, it could be more complete by mentioning error cases or prerequisites, though the reference to list_targets helps mitigate this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add meaning beyond what the schema provides about 'target' or 'identity' parameters. Baseline 3 is appropriate since the schema does the heavy lifting, though the description could have elaborated on parameter interactions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Generate the files needed to install a minimal BringYour harness') and resource ('into a specific AI tool'). It distinguishes from siblings by specifying it generates installation files rather than bootstrapping from GitHub, listing targets, or migrating harnesses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('install a minimal BringYour harness into a specific AI tool') and references list_targets for target selection. However, it doesn't explicitly state when NOT to use it or compare with alternatives like bootstrap_from_github for different installation methods.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_targets: A

List the 13 AI tools BringYour can produce harness files for, with each target's read/write/paste capability and brief description. Call this first to discover what 'target' values install_harness accepts.

Parameters (JSON Schema): none.
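An agent can also guard against bad target values before calling install_harness. The 13 identifiers below come from the install_harness parameter description above; validate_target is a hypothetical client-side check, since the real discovery path is calling list_targets.

```python
# The 13 target identifiers install_harness accepts, per its parameter table.
KNOWN_TARGETS = {
    "claude-code", "cursor", "codex", "openclaw", "aider", "continue",
    "cline", "goose", "zed", "roo-code", "chatgpt", "claude-ai", "copilot",
}

def validate_target(name: str) -> str:
    """Fail fast before calling install_harness with an unknown target."""
    if name not in KNOWN_TARGETS:
        raise ValueError(
            f"unknown target {name!r}; call list_targets to discover valid values"
        )
    return name
```

Hard-coding the set trades freshness for a fast local check; calling list_targets remains the authoritative way to discover what the server currently supports.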

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by specifying the exact scope (13 AI tools), the structured information returned (capabilities and descriptions), and the purpose within the workflow. It doesn't mention potential limitations like rate limits or authentication requirements, but provides substantial behavioral context for a read-only discovery tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences that are perfectly front-loaded: the first describes exactly what the tool returns, the second provides crucial usage guidance. Every word earns its place with zero redundancy or wasted space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter discovery tool with no annotations and no output schema, the description provides excellent context about what information will be returned and how it fits into the broader workflow. The only minor gap is not explicitly describing the output format/structure, but given this is a simple list tool, the description is quite complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the baseline would be 3. However, the description adds value by explicitly stating there are no parameters needed ('Call this first' implies no configuration required), which helps the agent understand this is a simple discovery call with no input requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('13 AI tools BringYour can produce harness files for'), with specific details about what information is included (target's read/write/paste capability and brief description). It explicitly distinguishes this tool from its sibling 'install_harness' by explaining the relationship between them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Call this first to discover what 'target' values install_harness accepts'), creating a clear workflow relationship with a specific sibling tool. It establishes a prerequisite relationship rather than just describing functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

migrate_harness: A

Translate an existing harness from one tool's format to another's. The calling agent reads the user's source harness files locally (e.g. from ~/.claude/ or .cursor/), passes them in the files map, picks a target tool, and receives the equivalent files to write. Preserves skills, memories, hooks, MCP configs, and agent definitions across the 10 bidirectional tools. Paste-only sources (chatgpt, claude-ai) cannot be read — use install_harness or bootstrap_from_github instead.

Parameters (JSON Schema):
- to (required): Target tool. Any of the 13 (bidirectional + paste + copilot).
- from (required): Source tool. One of the 10 bidirectional targets: claude-code, cursor, codex, openclaw, aider, continue, cline, goose, zed, roo-code.
- files (required): Map of {relative_path: content} capturing the source harness as the agent read it from the user's machine. Paths must be relative to the harness root (e.g. 'CLAUDE.md', 'skills/foo/SKILL.md'). Binary files are not supported.
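Building the 'files' argument is the agent's job: read the source harness from disk, map harness-root-relative paths to text content, and skip binaries (which the tool does not support). A sketch under those assumptions (collect_harness is a hypothetical helper):

```python
from pathlib import Path
import tempfile

def collect_harness(root: Path) -> dict[str, str]:
    """Map harness-root-relative paths to file content, skipping binaries."""
    files = {}
    for path in sorted(root.rglob("*")):
        if not path.is_file():
            continue
        try:
            content = path.read_text(encoding="utf-8")
        except UnicodeDecodeError:
            continue  # not valid UTF-8 text: treat as binary and skip
        files[path.relative_to(root).as_posix()] = content
    return files

# Demo with a throwaway directory standing in for a real source like ~/.claude/:
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "CLAUDE.md").write_text("# memory\n", encoding="utf-8")
    print(collect_harness(Path(tmp)))
    # → {'CLAUDE.md': '# memory\n'}
```

Using as_posix() keeps the keys in forward-slash form ('skills/foo/SKILL.md') regardless of the host OS, matching the relative-path examples in the schema.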
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the agent must read files locally and pass them in, the tool preserves specific components (skills, memories, hooks, MCP configs, agent definitions), binary files are not supported, and it works across 10 bidirectional tools. However, it doesn't mention error handling, performance characteristics, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose in the first sentence. Each subsequent sentence adds necessary context about constraints and alternatives. While efficient, the final sentence about paste-only sources could be slightly more concise by integrating it with the earlier tool scope explanation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex migration tool with no annotations and no output schema, the description does well by explaining the transformation process, file handling requirements, tool compatibility constraints, and exclusions. It covers the essential context needed to understand what the tool does and when to use it, though it doesn't describe the output format or potential error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds some context about the 'files' parameter (agent reads from user's machine, paths relative to harness root) and clarifies the scope of 'from' and 'to' parameters (10 bidirectional tools vs. 13 total including paste-only). This provides marginal value beyond the schema, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Translate an existing harness'), identifies the resource ('from one tool's format to another's'), and distinguishes from siblings by mentioning paste-only sources cannot be read and directing to install_harness or bootstrap_from_github instead. It explicitly names the 10 bidirectional tools versus paste-only sources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (for translating between the 10 bidirectional tools) and when not to use it (for paste-only sources like chatgpt, claude-ai). It names specific alternatives (install_harness, bootstrap_from_github) for those excluded cases, giving clear context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
