BringYour
Server Details
Move your AI agent harness between 13 tools, or bootstrap one from GitHub. $49 lifetime.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose with no overlap: bootstrap_from_github creates starter harnesses from GitHub profiles, install_harness generates installation files, list_targets provides metadata about supported tools, and migrate_harness converts between harness formats. The descriptions clearly differentiate their functions, making misselection unlikely.
All tools follow a consistent verb_noun pattern with clear, descriptive names: bootstrap_from_github, install_harness, list_targets, and migrate_harness. The naming convention is uniform throughout, using snake_case and action-oriented verbs that accurately reflect each tool's function.
Four tools is well-scoped for a harness management server, covering the complete lifecycle: discovery (list_targets), creation (bootstrap_from_github), installation (install_harness), and migration (migrate_harness). Each tool earns its place without redundancy, making the set efficient and focused.
The tool surface provides complete coverage for harness management: it supports discovery of targets, creation from scratch or GitHub profiles, installation into tools, and migration between formats. There are no obvious gaps—agents can perform all essential operations without dead ends, including handling edge cases like paste-only sources.
Available Tools
4 tools

bootstrap_from_github
Generate a starter harness from a user's public GitHub profile. Reads their repos, topics, commit style, and pinned projects, synthesizes a first-draft CLAUDE.md / AGENTS.md / rules file for the chosen target. The agent writes the returned files to the user's machine. No OAuth required; uses unauthenticated public-repo data. Optional gh_token arg increases GitHub rate limits.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target tool. Same 13 options as install_harness. | |
| gh_token | No | Optional GitHub PAT for higher rate limits. Send only if the user explicitly supplied one; don't ask for it by default. | |
| github_username | Yes | GitHub handle without the @, e.g. 'torvalds'. Must have public repos. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behaviors: it writes files to the user's machine (output action), requires no OAuth (authentication needs), uses public data only (data scope), and mentions rate limit implications with the optional token. It doesn't cover error handling or file overwrite behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: first states the core purpose, second explains the output action and authentication, third covers the optional parameter. Every sentence adds essential information with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, no annotations, and no output schema, the description is adequate but has gaps. It covers the main action and behavioral context well, but doesn't explain what the 13 'to' options are, what the output files contain, or how errors are handled. Given the complexity, it should provide more guidance on the target parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema: it clarifies that 'gh_token' increases rate limits and should only be used if explicitly supplied, but doesn't explain the 'to' parameter's 13 options or provide examples for 'github_username'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Generate a starter harness'), the source ('from a user's public GitHub profile'), and the output ('CLAUDE.md / AGENTS.md / rules file'). It distinguishes this tool from siblings like 'install_harness' by focusing on creation from GitHub data rather than installation or listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to create starter files from GitHub data) and mentions an optional parameter for rate limits. However, it doesn't explicitly state when NOT to use it or compare it to alternatives like 'install_harness' for different scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
install_harness
Generate the files needed to install a minimal BringYour harness into a specific AI tool. Returns a map of relative file paths → content. The calling agent should write these files to the appropriate directory on the user's machine (e.g., $HOME/.claude for Claude Code, .cursor/ at the project root for Cursor). Does NOT touch the caller's filesystem — the agent does that.
| Name | Required | Description | Default |
|---|---|---|---|
| target | Yes | Target tool. One of: claude-code, cursor, codex, openclaw, aider, continue, cline, goose, zed, roo-code, chatgpt, claude-ai, copilot. Call list_targets first if unsure. | |
| identity | No | Optional short self-description the user wants baked into the harness (e.g., 'Go backend engineer, prefers terse explanations, dislikes comments that explain what code does'). If empty, a neutral starter template is emitted. | |
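Since install_harness returns a map of relative paths to content and leaves the writing to the caller, the agent-side step might look like the sketch below. `write_harness` is a hypothetical helper; the path-traversal guard is our own safety assumption, not documented server behavior.

```python
from pathlib import Path

def write_harness(files: dict[str, str], dest: str) -> list[Path]:
    """Write the {relative_path: content} map returned by install_harness.

    The tool never touches the filesystem, so the caller writes the files
    under the target directory (e.g. $HOME/.claude for claude-code).
    Rejects absolute or parent-escaping paths as a safety guard.
    """
    root = Path(dest).resolve()
    written = []
    for rel, content in files.items():
        target = (root / rel).resolve()
        if not target.is_relative_to(root):  # block traversal like '../x'
            raise ValueError(f"unsafe path: {rel}")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content, encoding="utf-8")
        written.append(target)
    return written
```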
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it generates files but 'Does NOT touch the caller's filesystem — the agent does that,' clarifies the output format ('map of relative file paths → content'), and explains the agent's responsibility for writing files. It doesn't cover rate limits or error handling, but provides essential operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by output details and behavioral constraints. Every sentence earns its place: first states what it does, second describes output format, third explains agent responsibility, and fourth clarifies what it doesn't do. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does well by explaining the output format and behavioral constraints. It covers the tool's purpose, usage context, and limitations. However, it could be more complete by mentioning error cases or prerequisites, though the reference to list_targets helps mitigate this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add meaning beyond what the schema provides about 'target' or 'identity' parameters. Baseline 3 is appropriate since the schema does the heavy lifting, though the description could have elaborated on parameter interactions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Generate the files needed to install a minimal BringYour harness') and resource ('into a specific AI tool'). It distinguishes from siblings by specifying it generates installation files rather than bootstrapping from GitHub, listing targets, or migrating harnesses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('install a minimal BringYour harness into a specific AI tool') and references list_targets for target selection. However, it doesn't explicitly state when NOT to use it or compare with alternatives like bootstrap_from_github for different installation methods.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_targets
List the 13 AI tools BringYour can produce harness files for, with each target's read/write/paste capability and brief description. Call this first to discover what 'target' values install_harness accepts.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
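The discovery-first workflow (list_targets before install_harness) can be sketched agent-side. `choose_target` is a hypothetical helper, and we assume each list_targets entry carries at least a 'name' field; the actual output structure is not specified above.

```python
def choose_target(targets: list[dict], wanted: str) -> str:
    """Validate a user-requested target against list_targets output.

    Assumed shape: each entry has a 'name' key. Failing fast here avoids
    passing an unknown 'target' value to install_harness.
    """
    names = {t["name"] for t in targets}
    if wanted not in names:
        raise ValueError(f"unknown target {wanted!r}; valid: {sorted(names)}")
    return wanted
```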
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by specifying the exact scope (13 AI tools), the structured information returned (capabilities and descriptions), and the purpose within the workflow. It doesn't mention potential limitations like rate limits or authentication requirements, but provides substantial behavioral context for a read-only discovery tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are perfectly front-loaded: the first describes exactly what the tool returns, the second provides crucial usage guidance. Every word earns its place with zero redundancy or wasted space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter discovery tool with no annotations and no output schema, the description provides excellent context about what information will be returned and how it fits into the broader workflow. The only minor gap is not explicitly describing the output format/structure, but given this is a simple list tool, the description is quite complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline would be 3. However, the description adds value by making clear that no configuration is required ('Call this first' signals the tool can be invoked immediately), which helps the agent treat this as a simple discovery call with no input requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('13 AI tools BringYour can produce harness files for'), with specific details about what information is included (target's read/write/paste capability and brief description). It explicitly distinguishes this tool from its sibling 'install_harness' by explaining the relationship between them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Call this first to discover what 'target' values install_harness accepts'), creating a clear workflow relationship with a specific sibling tool. It establishes a prerequisite relationship rather than just describing functionality.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
migrate_harness
Translate an existing harness from one tool's format to another's. The calling agent reads the user's source harness files locally (e.g. from ~/.claude/ or .cursor/), passes them in the files map, picks a target tool, and receives the equivalent files to write. Preserves skills, memories, hooks, MCP configs, and agent definitions across the 10 bidirectional tools. Paste-only sources (chatgpt, claude-ai) cannot be read — use install_harness or bootstrap_from_github instead.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target tool. Any of the 13 (bidirectional + paste + copilot). | |
| from | Yes | Source tool. One of the 10 bidirectional targets: claude-code, cursor, codex, openclaw, aider, continue, cline, goose, zed, roo-code. | |
| files | Yes | Map of {relative_path: content} capturing the source harness as the agent read it from the user's machine. Paths must be relative to the harness root (e.g. 'CLAUDE.md', 'skills/foo/SKILL.md'). Binary files are not supported. | |
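Building the `files` map is the agent's job before calling migrate_harness. A minimal sketch, assuming the agent walks the harness root and skips anything that fails UTF-8 decoding (binary files are unsupported per the table); `read_harness` is a hypothetical helper name.

```python
from pathlib import Path

def read_harness(root: str) -> dict[str, str]:
    """Collect a source harness into the {relative_path: content} map
    that migrate_harness expects.

    Paths are emitted relative to the harness root in POSIX form
    (e.g. 'skills/foo/SKILL.md'); binary files are skipped.
    """
    base = Path(root)
    files = {}
    for path in base.rglob("*"):
        if not path.is_file():
            continue
        try:
            files[path.relative_to(base).as_posix()] = path.read_text(encoding="utf-8")
        except UnicodeDecodeError:  # binary file: not supported, skip
            continue
    return files
```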
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the agent must read files locally and pass them in, the tool preserves specific components (skills, memories, hooks, MCP configs, agent definitions), binary files are not supported, and it works across 10 bidirectional tools. However, it doesn't mention error handling, performance characteristics, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. Each subsequent sentence adds necessary context about constraints and alternatives. While efficient, the final sentence about paste-only sources could be slightly more concise by integrating it with the earlier tool scope explanation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex migration tool with no annotations and no output schema, the description does well by explaining the transformation process, file handling requirements, tool compatibility constraints, and exclusions. It covers the essential context needed to understand what the tool does and when to use it, though it doesn't describe the output format or potential error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds some context about the 'files' parameter (agent reads from user's machine, paths relative to harness root) and clarifies the scope of 'from' and 'to' parameters (10 bidirectional tools vs. 13 total including paste-only). This provides marginal value beyond the schema, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Translate an existing harness'), identifies the resource ('from one tool's format to another's'), and distinguishes from siblings by mentioning paste-only sources cannot be read and directing to install_harness or bootstrap_from_github instead. It explicitly names the 10 bidirectional tools versus paste-only sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool (for translating between the 10 bidirectional tools) and when not to use it (for paste-only sources like chatgpt, claude-ai). It names specific alternatives (install_harness, bootstrap_from_github) for those excluded cases, giving clear context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.