ateam-mcp
Server Details
Build, validate, and deploy multi-agent AI solutions from any AI environment.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: ariekogan/ateam-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4.2/5 across all 33 tools scored. Lowest score: 3.2/5.
The tool set has clear functional groupings (e.g., authentication, deployment, GitHub operations, testing), but there is significant overlap within groups that could cause confusion. For example, ateam_build_and_run, ateam_patch, ateam_redeploy, and ateam_upload_connector all handle deployment or updates with nuanced differences that may be hard for an agent to distinguish without careful reading of descriptions. Tools like ateam_test_skill and ateam_conversation both involve sending messages to skills, differing mainly in testing vs. production contexts.
Tool names follow a highly consistent verb_noun pattern with the 'ateam_' prefix throughout, such as ateam_auth, ateam_build_and_run, ateam_github_read, and ateam_test_skill. All names use snake_case uniformly, and verbs are descriptive and aligned with actions (e.g., get, list, delete, patch), making the set predictable and easy to parse.
With 33 tools, the count is excessive for a single server, leading to cognitive overload and potential redundancy. While the domain (A-Team platform management) is broad, many tools could be consolidated or parameterized (e.g., multiple GitHub operations or testing variants). This high number increases complexity and the risk of tool misselection, making it feel heavy and unwieldy for typical agent use.
The tool set provides comprehensive coverage for the A-Team platform domain, including authentication, solution lifecycle (create, read, update, delete), GitHub integration, testing, and administrative operations. There are no obvious gaps; tools support CRUD for solutions and connectors, deployment workflows, version control, and various testing scenarios, ensuring agents can handle end-to-end tasks without dead ends.
Available Tools
33 tools

ateam_auth (A)
Authenticate with A-Team. Required before any tenant-aware operation (reading solutions, deploying, testing, etc.). The user can get their API key at https://mcp.ateam-ai.com/get-api-key. Only global endpoints (spec, examples, validate) work without auth. IMPORTANT: Even if environment variables (ADAS_API_KEY) are configured, you MUST call ateam_auth explicitly — env vars alone are not sufficient. For cross-tenant admin operations, use master_key instead of api_key.
| Name | Required | Description | Default |
|---|---|---|---|
| url | No | Optional API URL override (e.g., https://dev-api.ateam-ai.com). Use this to target a different environment without restarting the MCP server. | |
| tenant | No | Tenant name (e.g., dev, main). Optional with api_key if format is adas_<tenant>_<hex>. REQUIRED with master_key. | |
| api_key | No | Your A-Team API key (e.g., adas_xxxxx) | |
| master_key | No | Master key for cross-tenant operations. Authenticates across ALL tenants without per-tenant API keys. Requires tenant parameter. | |
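The api_key/master_key/tenant interaction documented above can be captured in a short client-side sketch. This is illustrative pre-flight validation, not part of the server; the rules and the `adas_<tenant>_<hex>` format come from the tool description, while the function name and exact regex are assumptions:

```python
import re

def validate_auth_args(api_key=None, master_key=None, tenant=None, url=None):
    """Illustrative pre-flight check for ateam_auth arguments.

    Rules from the tool description:
      - master_key REQUIRES an explicit tenant.
      - api_key may omit tenant only when it embeds one (adas_<tenant>_<hex>).
    """
    if master_key is not None:
        if tenant is None:
            raise ValueError("master_key requires the tenant parameter")
        args = {"master_key": master_key, "tenant": tenant}
    elif api_key is not None:
        if tenant is None:
            # Assumed regex for the documented adas_<tenant>_<hex> format.
            m = re.fullmatch(r"adas_([A-Za-z0-9-]+)_([0-9a-fA-F]+)", api_key)
            if not m:
                raise ValueError("tenant required unless api_key is adas_<tenant>_<hex>")
            tenant = m.group(1)
        args = {"api_key": api_key, "tenant": tenant}
    else:
        raise ValueError("provide api_key or master_key")
    if url is not None:
        args["url"] = url
    return args
```

With a tenant-bearing key such as `adas_dev_0abc12`, the tenant (`dev`) can be derived and omitted from the call; with a master key it must always be passed explicitly.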
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses that the tool establishes authentication state for subsequent calls, explains the relationship between api_key (tenant-specific) and master_key (cross-tenant), and clarifies that environment variables alone are insufficient. Minor gap: doesn't describe token expiration or explicit side effects beyond "authentication".
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Six information-dense sentences with zero waste: purpose (1), requirement scope (2), credential source (3), exceptions (4), environment variable constraint (5), and cross-tenant alternative (6). Front-loaded with the core action, followed by operational context and specific constraints. Appropriate length for a critical prerequisite tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter authentication tool with 100% schema coverage but no output schema, the description is substantially complete. It covers credential acquisition (URL provided), operational prerequisites relative to siblings, environment variable interactions, and tenant scoping rules. Only minor gap is absence of return value description, though this is less critical for an auth initialization tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds valuable semantic context beyond the schema: api_key format examples ("adas_xxxxx"), tenant format pattern ("adas_<tenant>_<hex>"), and the mutual exclusivity/relationship logic between api_key, master_key, and tenant parameters that isn't captured in isolated schema field descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with "Authenticate with A-Team" (specific verb+resource) and immediately distinguishes this from siblings by stating it's "Required before any tenant-aware operation (reading solutions, deploying, testing, etc.)"—explicitly listing the operational sibling tools that depend on this auth step.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use ("Required before any tenant-aware operation"), when-not-needed ("Only global endpoints (spec, examples, validate) work without auth"), and alternatives ("For cross-tenant admin operations, use master_key instead of api_key"). Also includes critical exclusion: "Even if environment variables (ADAS_API_KEY) are configured, you MUST call ateam_auth explicitly".
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_bootstrap (A)
REQUIRED onboarding entrypoint for A-Team MCP. MUST be called when user greets, says hi, asks what this is, asks for help, explores capabilities, or when MCP is first connected. Returns platform explanation, example solutions, and assistant behavior instructions. Do NOT improvise an introduction — call this tool instead.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses what the tool returns (platform explanation, instructions), but omits side effects, idempotency, prerequisites, or whether calling it multiple times is safe. Adequate but not rich for a 'bootstrap' entrypoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Highly structured with two dense sentences. Front-loaded with 'REQUIRED' to signal importance. First sentence covers purpose and triggers; second covers returns and prohibitions. Zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description compensates by detailing return contents. Given zero parameters and no annotations, description provides sufficient context for an onboarding tool, though could note error conditions or retry behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters. Description correctly implies no user input is needed by focusing entirely on trigger conditions rather than parameters. Baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly defines this as the 'REQUIRED onboarding entrypoint' and distinguishes it from operational siblings (auth, build, delete, etc.) by specifying it returns 'platform explanation, example solutions, and assistant behavior instructions.' Clear verb+resource+scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit trigger conditions ('when user greets, says hi, asks what this is...') and explicit prohibition ('Do NOT improvise an introduction — call this tool instead'). Covers both when-to-use and when-not-to-use with high specificity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_build_and_run (A)
Build and deploy a governed AI Team solution in one step. ⚠️ HEAVIEST OPERATION (60-180s): validates solution+skills → deploys all connectors+skills to A-Team Core (regenerates MCP servers) → health-checks → optionally runs a warm test → auto-pushes to GitHub. AUTO-DETECTS GitHub repo: if you omit mcp_store and a repo exists, connector code is pulled from GitHub automatically. First deploy requires mcp_store. After that, write files via ateam_github_write, then just call build_and_run without mcp_store. For small changes to an already-deployed solution, prefer ateam_patch (faster, incremental). Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| github | No | Optional: if true, pull connector source code from the solution's GitHub repo. AUTO-DETECTED: if you omit both mcp_store and github, the system checks if a repo exists and pulls from it automatically. You rarely need to set this explicitly. | |
| skills | No | Optional after first deploy: skill definitions. If omitted, auto-pulled from GitHub repo (skills/{id}/skill.json). | |
| solution | No | Full solution definition. Required on first deploy. After first deploy, just pass solution_id instead — everything is auto-pulled from GitHub. | |
| mcp_store | No | Optional: connector source code files. Key = connector id, value = array of {path, content}. | |
| connectors | No | Optional: connector metadata (id, name, transport). Entry points auto-detected from mcp_store. | |
| solution_id | No | The solution ID. Use this INSTEAD of passing the full solution object — the solution definition is auto-pulled from GitHub. Required if solution object is omitted. | |
| test_message | No | Optional: send a test message after deployment to verify the skill works. Returns the full execution result. | |
| test_skill_id | No | Optional: which skill to test (defaults to the first skill). | |
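The mcp_store/github auto-detection rules above reduce to a small precedence decision. A sketch of that precedence, assuming the server checks inline files first, then the explicit flag, then repo existence (the function and return values are illustrative, not the server's actual internals):

```python
def resolve_connector_source(mcp_store=None, github=None, repo_exists=False):
    """Where build_and_run pulls connector code from (illustrative).

    Precedence per the documented behavior:
      1. mcp_store provided -> use the inline files (required on first deploy).
      2. github=True, or both omitted while a repo exists -> pull from GitHub.
      3. Otherwise there is nothing to deploy from.
    """
    if mcp_store is not None:
        return "inline"
    if github is True or (github is None and repo_exists):
        return "github"
    return "none"  # first deploy without mcp_store cannot proceed
```

This is why the description says you "rarely need to set" `github` explicitly: after the first deploy, omitting both parameters falls through to the auto-detected repo.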
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, yet description fully discloses: duration (60-180s), destructive side effects ('regenerates MCP servers'), mutation behavior ('auto-pushes to GitHub'), auto-detection logic, and multi-step pipeline (validate→deploy→health-check→test). Carries full transparency burden effectively.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Dense but zero waste. Front-loaded with critical weight warning ('⚠️ HEAVIEST OPERATION'), uses arrow notation for efficient step enumeration, and emoji draws attention to duration. Every clause earns place despite length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for complex 8-parameter deployment tool. Covers full pipeline (validation, deployment, regeneration, health checks, GitHub push), authentication needs, output behavior hint ('Returns the full execution result'), and workflow variations. No output schema present but return values sufficiently hinted.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% so baseline is 3. Description adds valuable workflow semantics: mcp_store required only for first deploy, solution_id preferred over solution object post-deploy, and complex GitHub auto-detection interaction when omitting both mcp_store and github params. Adds meaningful usage context beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb+resource ('Build and deploy a governed AI Team solution') and distinguishes from sibling ateam_patch ('For small changes...prefer ateam_patch'). Also signals heavy weight vs lighter alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit workflow guidance ('First deploy requires mcp_store. After that...just call build_and_run without mcp_store'), alternative named ('prefer ateam_patch'), and prerequisite stated ('Requires authentication'). Clear when to use full deploy vs incremental patch.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_conversation (A)
Send a message to a deployed solution and get the result. No skill_id needed — the system auto-routes to the right skill. Supports multi-turn conversations: pass the actor_id from a previous response to continue the thread (e.g., reply to a confirmation prompt). Each call creates a new job but the same actor_id maintains conversation context.
| Name | Required | Description | Default |
|---|---|---|---|
| wait | No | If true (default), wait for completion. If false, return job_id immediately for polling. | |
| message | Yes | The message to send (e.g., 'send email to X' or 'I confirm') | |
| actor_id | No | Optional: actor ID from a previous response to continue the conversation. Omit for a new conversation. | |
| timeout_ms | No | Optional: max wait time in ms (default: 60000, max: 300000). | |
| solution_id | Yes | The solution ID | |
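The multi-turn pattern above (omit actor_id on the first call, thread it through afterwards) can be sketched as a small argument builder. The helper name, the example actor id, and the client-side enforcement of the 300000 ms cap are illustrative; the defaults and maximum come from the parameter table:

```python
def conversation_args(solution_id, message, actor_id=None, wait=True,
                      timeout_ms=60_000):
    """Build an ateam_conversation argument dict (illustrative helper).

    Pass actor_id from a previous response to continue the same thread;
    omit it to start a new conversation. timeout_ms is clamped to the
    documented maximum of 300000 ms.
    """
    args = {
        "solution_id": solution_id,
        "message": message,
        "wait": wait,
        "timeout_ms": min(timeout_ms, 300_000),
    }
    if actor_id is not None:
        args["actor_id"] = actor_id
    return args

# Turn 1: new conversation (no actor_id)
first = conversation_args("smart-home-assistant", "send email to X")
# Turn 2: continue the thread with the actor_id returned by turn 1
# ("actor-123" is a hypothetical value for illustration)
followup = conversation_args("smart-home-assistant", "I confirm",
                             actor_id="actor-123")
```

Note that each call still creates a new job; only the shared actor_id carries the conversation context across jobs.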
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden and succeeds well: it explains the auto-routing behavior, that each call creates a new job (side effect), and that actor_id maintains conversation state across jobs. It clarifies the synchronous nature implied by 'get the result' (aligned with the wait parameter).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero redundancy. Front-loaded with core purpose, followed by routing logic, multi-turn mechanics, and job/context separation. Every sentence adds non-obvious information about behavior or usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a conversational tool with no output schema. It adequately covers the essential mechanics: routing, job creation, and state persistence. Minor gap in not describing error conditions or the specific structure of 'the result', but given the schema coverage and complexity level, it provides sufficient context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage (baseline 3), the description adds significant semantic value: it provides concrete message examples ('send email to X', 'I confirm') and explains the actor_id parameter's purpose in the conversation lifecycle ('continue the thread', 'reply to a confirmation prompt'), helping the agent understand the stateful interaction pattern.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a precise action ('Send a message to a deployed solution') and outcome ('get the result'). It clearly distinguishes this from sibling tools by emphasizing the conversation/messaging pattern versus build, delete, or test operations evident in sibling names like ateam_build_and_run or ateam_delete_solution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context for when to use (messaging deployed solutions, multi-turn interactions) and explains the conversation continuation pattern via actor_id. The note about auto-routing ('No skill_id needed') implicitly guides against manual skill selection. Lacks explicit 'when not to use' or named alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_delete_connector (A)
Remove a connector from a deployed solution. Stops and deletes it from A-Team Core, removes references from the solution definition (grants, platform_connectors) and skill definitions (connectors array), and cleans up mcp-store files.
| Name | Required | Description | Default |
|---|---|---|---|
| solution_id | Yes | The solution ID (e.g. 'smart-home-assistant') | |
| connector_id | Yes | The connector ID to remove (e.g. 'device-mock-mcp') | |
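The cascading reference cleanup described above can be sketched in a few lines. The exact shapes of `grants`, `platform_connectors`, and each skill's `connectors` array are assumptions (the description names the fields but not their structure), and the real server logic may differ:

```python
def strip_connector_refs(solution, skills, connector_id):
    """Remove all references to a connector (illustrative cascade cleanup).

    Mirrors the documented steps of ateam_delete_connector: scrub the
    solution definition (grants, platform_connectors) and every skill's
    connectors array. Assumed shapes: grants entries carry a 'connector'
    key; platform_connectors and skill connectors are lists of ids.
    """
    solution["grants"] = [
        g for g in solution.get("grants", []) if g.get("connector") != connector_id
    ]
    solution["platform_connectors"] = [
        c for c in solution.get("platform_connectors", []) if c != connector_id
    ]
    for skill in skills:
        skill["connectors"] = [
            c for c in skill.get("connectors", []) if c != connector_id
        ]
    return solution, skills
```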
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and excels by disclosing cascade effects: it stops/deletes from Core, removes references from solution definitions and skill definitions, and cleans up mcp-store files. This comprehensively explains the destructive footprint and side effects beyond simple deletion.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, dense sentence efficiently enumerates four distinct cleanup actions (Core deletion, solution def updates, skill def updates, file cleanup). Every clause conveys necessary behavioral information with no redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for the destructive operation itself, detailing all affected resources. Minor gap: given no output schema and no annotations, it could mention return behavior (void/success message) or error conditions (e.g., if connector doesn't exist), but the core behavioral disclosure is complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with clear examples ('smart-home-assistant', 'device-mock-mcp'). The description does not add parameter semantics beyond the schema, but this meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Remove a connector from a deployed solution,' providing a specific verb (Remove) and resource (connector). It clearly distinguishes from sibling ateam_delete_solution by specifying it operates on a connector 'from' a solution rather than deleting the solution itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description details the scope of changes (removing references from grants, platform_connectors, skill definitions) which implicitly guides usage, but lacks explicit when-to-use guidance versus alternatives like ateam_test_connector or ateam_delete_solution, and omits prerequisites or warnings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_delete_solution (A)
Delete a deployed solution and all its skills from A-Team. Use with caution — this removes the solution from both the Skill Builder and A-Team Core. Useful for cleaning up test solutions or starting fresh.
| Name | Required | Description | Default |
|---|---|---|---|
| solution_id | Yes | The solution ID to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Explicitly discloses scope of destruction ('removes the solution from both the Skill Builder and A-Team Core'), cascading effects ('all its skills'), and danger level ('Use with caution'). Could be enhanced by stating irreversibility explicitly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences front-loaded with the action. Each sentence earns its place: purpose, scope/warning, and use case guidance. No redundancy or waste despite covering behavioral details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a single-parameter destructive operation with no output schema, the description adequately covers what gets destroyed and where. Slightly missing explicit confirmation about irreversibility and prerequisite checks, but sufficiently complete for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with 'solution_id' fully documented. Description mentions 'deployed solution' which maps to the parameter, but does not add format details, validation rules, or constraints beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Delete' with clear resource 'deployed solution and all its skills'. Explicitly distinguishes from sibling 'ateam_delete_connector' by targeting 'solution' rather than connector, and from other operations like redeploy or patch.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear positive guidance ('Useful for cleaning up test solutions or starting fresh') and negative caution ('Use with caution'). Missing explicit alternative suggestions (e.g., when to prefer redeploy over delete), but use cases are well-defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_get_connector_source (A)
Read the source code files of a deployed MCP connector. Returns all files (server.js, package.json, etc.) stored in the mcp_store for this connector. Use this BEFORE patching or rewriting a connector — always read the current code first so you can make surgical fixes instead of blind full rewrites.
| Name | Required | Description | Default |
|---|---|---|---|
| solution_id | Yes | The solution ID (e.g. 'smart-home-assistant') | |
| connector_id | Yes | The connector ID to read (e.g. 'home-assistant-mcp') | |
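The read-before-patch workflow the description prescribes looks roughly like this. Here `call_tool` stands in for whatever MCP client invocation the agent uses, and the `{"files": [{"path": ..., "content": ...}]}` response shape is an assumption, since no output schema is published:

```python
def surgical_fix(call_tool, solution_id, connector_id, transform):
    """Read current connector source, then compute a minimal changeset.

    call_tool: hypothetical MCP client callable (tool_name, **args) -> dict.
    transform: (path, content) -> fixed content, or None to skip the file.
    The returned changed-files list would feed an incremental update
    (e.g. ateam_patch) instead of a blind full rewrite.
    """
    source = call_tool("ateam_get_connector_source",
                       solution_id=solution_id, connector_id=connector_id)
    changed = []
    for f in source["files"]:  # assumed response shape
        fixed = transform(f["path"], f["content"])
        if fixed is not None and fixed != f["content"]:
            changed.append({"path": f["path"], "content": fixed})
    return changed
```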
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the storage location ('mcp_store'), provides concrete file examples ('server.js, package.json'), and explains the critical workflow implication (prevents blind rewrites). Minor gap: doesn't specify return format (content vs. metadata) or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste. Front-loaded with purpose ('Read...'), followed by scope ('Returns...'), then critical workflow guidance ('Use this BEFORE...'). Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description partially compensates by stating what is returned ('all files... server.js, package.json'). Covers the essential workflow context for a 2-parameter read tool. Could strengthen by clarifying if file contents or just paths are returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions and examples for both solution_id and connector_id. Description adds minimal semantic value beyond the schema (baseline 3), though it does reference 'mcp_store' as the storage context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Read' + resource 'source code files of a deployed MCP connector' with clear scope (all files in mcp_store). Explicitly distinguishes from siblings like ateam_get_solution or ateam_github_read by specifying 'mcp_store' and 'connector source'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit workflow instruction: 'Use this BEFORE patching or rewriting a connector.' Names the alternatives implicitly (patching/rewriting) and explains the value proposition ('surgical fixes instead of blind full rewrites').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_get_examples (B)
Get complete working examples that pass validation. Study these before building your own.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Example type: 'skill' = Order Support Agent, 'connector' = stdio MCP connector, 'connector-ui' = UI-capable connector, 'solution' = full 3-skill e-commerce solution, 'script-cache-skill' = fat-tool skill with script_cache opt-in (reference implementation of script-level JIT shortcuts — study this before building any browser-automation skill), 'index' = list all available examples | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full disclosure burden. It successfully notes examples 'pass validation' (behavioral quality trait), but omits critical details like return format (files? JSON? code?), side effects, or idempotency. Adequate but thin.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero redundancy. First establishes function and quality; second gives actionable guidance. Front-loaded with action verb. Could be slightly more information-dense regarding outputs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple single-parameter structure with complete schema coverage and no output schema, description adequately covers intent. However, for a retrieval tool with no output schema, it should specify what gets returned (file paths, code strings, etc.)—this gap leaves it incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with detailed enum mappings (e.g., 'skill = Order Support Agent'); therefore baseline 3 is appropriate. Description adds no parameter-specific guidance beyond what schema provides, but doesn't need to compensate for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resource 'working examples' and adds quality context 'that pass validation.' It loosely distinguishes from siblings like ateam_get_solution by framing these as study materials rather than operational artifacts, though it lacks an explicit contrast.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage timing ('Study these before building your own'), suggesting these are reference materials for development. However, lacks explicit when-not-to-use guidance, prerequisites, or named alternatives (e.g., when to use ateam_get_spec instead).
ateam_get_solution (Grade: B)
Read solution state — definition, skills, health, status, or export. Use this to inspect deployed solutions.
| Name | Required | Description | Default |
|---|---|---|---|
| view | Yes | What to read: 'definition' = full solution def, 'skills' = list skills, 'health' = live health check, 'status' = deploy status, 'export' = exportable bundle, 'validate' = re-validate from stored state, 'connectors_health' = connector status | |
| skill_id | No | Optional: read a specific skill by ID (original or internal) | |
| solution_id | Yes | The solution ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full disclosure burden. It enumerates the view modes, which hints at return variations, but omits critical behavioral details: it does not confirm read-only safety, describe error responses (e.g., if solution_id is not found), or state authentication requirements. Just above minimum, as it does map the view options.
Is the description appropriately sized, front-loaded, and free of redundancy?
Appropriately concise two-sentence structure. Action and resource are front-loaded in first sentence; usage intent follows in second. No filler text, though em-dash list is slightly compressed.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 3-parameter read operation with complete schema documentation. Missing output schema description (would benefit from knowing return structure) and safety guarantees given lack of readOnlyHint annotation. Acceptable but not exemplary.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description echoes the view enum values ('definition, skills, health, status, or export') without adding syntax constraints or validation rules beyond the schema. No additional context for optional 'skill_id' parameter (e.g., when to use it) beyond schema's 'Optional' label.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Read' and resource 'solution state' with specific facets (definition, skills, health, status, export). Distinguishes from sibling 'ateam_list_solutions' by implying singular instance inspection vs. listing, and from mutation tools like 'ateam_delete_solution' or 'ateam_patch' via 'inspect', though the contrast with the list operation remains implicit rather than explicit.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides basic usage context with 'Use this to inspect deployed solutions', indicating when to use (examining existing deployments). However, lacks explicit when-NOT-to-use guidance and does not mention alternatives like 'ateam_list_solutions' for discovery without a specific ID.
ateam_get_spec (Grade: A)
Get the A-Team specification — schemas, validation rules, system tools, agent guides, and templates. Start here after bootstrap to understand how to build skills and solutions. Use 'section' to get just one part of the skill spec (much smaller than the full spec). Use 'search' to find specific fields or concepts across the spec.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | Yes | What to fetch: 'overview' = API overview + endpoints, 'skill' = full skill spec, 'solution' = full solution spec, 'enums' = all enum values, 'connector-multi-user' = multi-user connector guide | |
| search | No | Optional: filter the spec to only sections containing this search term. Works with any topic. Example: search='bootstrap' returns only fields/sections mentioning 'bootstrap'. | |
| section | No | Optional: get just one section of the skill spec (only works with topic='skill'). Sections: 'engine' = model/reasoning/planner optimization/bootstrap tools, 'tools' = tool definitions/meta tools, 'intents' = intents/problem/scenarios, 'policy' = access control/grants/workflows, 'triggers' = automation triggers, 'connectors' = connector linking/channels, 'role' = persona/goals, 'template' = minimal quick start, 'guide' = build steps/common mistakes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Provides useful payload size context ('much smaller than the full spec'). Missing explicit safety disclosure (read-only status), auth requirements, or rate limiting despite being a likely safe read operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: front-loaded purpose, sequenced usage guidance, and parameter hints. Every clause earns its place with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 well-documented parameters and no output schema, description adequately covers scope, content categories, and operational sequencing. Could improve by describing return structure or explicitly contrasting with other ateam_get_* siblings.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed enum descriptions in the schema, establishing baseline 3. Description reinforces usage patterns ('Use section to get...') but does not add semantic detail beyond what's already specified in schema property descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') + specific resource ('A-Team specification') with enumerated contents (schemas, validation rules, etc.). Distinguishes from sibling ateam_bootstrap by stating 'Start here after bootstrap', establishing clear sequencing.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('Start here after bootstrap') and provides internal usage patterns (use 'section' for partial results, 'search' for filtering). Lacks explicit comparison to other getter siblings like ateam_get_solution or ateam_get_examples for when NOT to use.
ateam_get_workflows (Grade: A)
Get the builder workflows — step-by-step state machines for building skills and solutions. Use this to guide users through the entire build process conversationally. Returns phases, what to ask, what to build, exit criteria, and tips for each stage.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Compensates well by detailing return structure (phases, what to ask, exit criteria, tips) since no output schema exists. Does not explicitly declare read-only nature or performance characteristics, but 'Get' prefix and detailed return description provide sufficient behavioral context.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence defines the resource and its nature; second sentence gives usage context and return value details. Front-loaded with the essential operation and resource type.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully complete for a zero-parameter tool lacking annotations and output schema. Description adequately explains both the invocation trigger and the return payload structure (compensating for missing output_schema), requiring no additional detail.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present. Per rubric, zero-parameter tools baseline at 4. No additional semantic elaboration needed or possible, and none is attempted unnecessarily.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Get' + resource 'builder workflows' + domain context 'for building skills and solutions'. The phrase 'step-by-step state machines' distinguishes this from sibling tools like ateam_get_solution or ateam_get_spec which retrieve specific artifacts rather than process workflows.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit positive guidance 'Use this to guide users through the entire build process conversationally', clearly indicating when to invoke the tool. Lacks explicit 'when-not' guidance or named alternative tools for specific sub-tasks, but the conversational guidance use case is unambiguous.
ateam_github_list_versions (Grade: A)
List all available checkpoints (safe-* tags) for a solution. Shows tag name, date, counter, and commit SHA. Use before rollback to see available safe points.
| Name | Required | Description | Default |
|---|---|---|---|
| solution_id | Yes | The solution ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses output fields returned (tag name, date, counter, commit SHA) which compensates somewhat for missing output schema. However, lacks explicit safety profile (read-only vs destructive) and error behavior (e.g., invalid solution_id) that annotations would normally cover.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences: action+resource, output fields, usage guidance. Front-loaded with core purpose, zero redundancy, each sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool without annotations or output schema, description adequately covers purpose, return data structure, and usage context. Could strengthen with error case mention, but sufficient for this complexity level.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with 'The solution ID' description. Description contextualizes as 'for a solution' but adds no additional format constraints, validation rules, or examples beyond schema baseline.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'List' with resource 'checkpoints (safe-* tags)' and scope 'for a solution'. Distinguishes from sibling git tools by specifying 'safe-* tags' pattern and checkpoint-specific semantics.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use before rollback to see available safe points', providing clear temporal guidance (when) and purpose (why), and implicitly referencing the ateam_github_rollback sibling workflow.
ateam_github_log (Grade: A)
View commit history for a solution's GitHub repo. Shows recent commits with messages, SHAs, timestamps, and links. Use this to see what changes have been made and when.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max commits to return (default: 10) | |
| solution_id | Yes | The solution ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses return content (messages, SHAs, timestamps, links) but omits explicit safety declarations (read-only, no side effects), rate limits, or pagination behavior that would help an agent understand operational constraints.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences with zero waste: purpose declaration, output specification, and usage guidance. Information is front-loaded and every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema and lack of output schema, the description compensates adequately by detailing what fields are returned (messages, SHAs, timestamps, links). For a read-only history tool, this is sufficiently complete, though explicit safety confirmation would strengthen it.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema adequately documents both solution_id and limit. The description implies the limit concept with 'recent commits' but does not add syntax details, format constraints, or semantic meaning beyond the schema definitions, warranting the baseline score.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'View' and resource 'commit history for a solution's GitHub repo', clearly distinguishing it from siblings like ateam_github_read (file contents), ateam_github_write (modifications), and ateam_github_list_versions (tags/releases).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Use this to see what changes have been made and when' provides clear context for when to invoke the tool (audit/trail inspection). However, it lacks explicit alternatives or exclusions (e.g., 'use github_read to view file contents instead of history').
ateam_github_patch (Grade: A)
Edit a file in the solution's GitHub repo and commit. Two modes:
- FULL FILE: provide `content` — replaces entire file (good for new files or small files)
- SEARCH/REPLACE: provide `search` + `replace` — surgical edit without sending full file (preferred for large files like server.js)

Always use search/replace for large files (>5KB). Always read the file first with ateam_github_read to get the exact text to search for.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | File path to create/update (e.g. 'connectors/home-assistant-mcp/server.js') | |
| search | No | Exact text to find in the file (mode 2 — search/replace). Must match exactly including whitespace. | |
| content | No | The full file content to write (mode 1 — full file replacement) | |
| message | No | Optional commit message (default: 'Update <path>') | |
| replace | No | Text to replace the search string with (mode 2 — required with search) | |
| solution_id | Yes | The solution ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It effectively explains the two distinct behavioral modes (destructive full replacement vs surgical search/replace), mutation side effects (commit), and size constraints. Minor gap: does not specify error behavior if search text is not found or if file is locked.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with clear numbered modes, backtick parameter references, and zero filler. Every sentence provides actionable guidance (modes, size threshold, prerequisite workflow). Efficiently organized for quick parsing by an agent.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description adequately covers the operational contract by explaining the commit action and dual-mode logic. Minor deduction for not describing failure modes or return confirmation details, though 'commit' implies the transactional boundary.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). Description adds significant semantic value by explaining parameter relationships (search+replace form mode 2, content forms mode 1) and operational context for solution_id (ties it to 'solution's GitHub repo'). Could add detail on message parameter defaults.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Edit a file... and commit), resource (solution's GitHub repo), and dual-mode scope (FULL FILE vs SEARCH/REPLACE). The mention of 'solution's GitHub repo' contextualizes the operation within the specific domain, distinguishing it from generic file editing tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance for each mode (full file for 'new files or small files', search/replace for 'large files >5KB'), includes a concrete threshold (>5KB), and explicitly names prerequisite sibling tool ateam_github_read with the instruction 'Always read the file first with ateam_github_read'.
ateam_github_promote (Grade: A)
Create a checkpoint (safe point) on the current main branch. Tags the current state with safe-YYYY-MM-DD-NNN so you can rollback to it later. Use this before risky changes or when the solution is in a known-good state.
| Name | Required | Description | Default |
|---|---|---|---|
| label | No | Optional: human-readable label for this checkpoint (e.g., 'before refactor', 'v2 stable') | |
| solution_id | Yes | The solution ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable behavioral context: tag naming convention (safe-YYYY-MM-DD-NNN), scope limitation (main branch only), and persistence (can rollback later). Missing execution details like atomicity, remote push behavior, or conflict handling if tag exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste. Front-loaded with action (create checkpoint), followed by mechanism (tag format), followed by usage guidance (when to use). Every sentence earns its place with high information density.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong for a mutation tool with no output schema. Covers 'what', 'how' (tag format), and 'when'. Minor gap: doesn't describe return value (tag name? success boolean?) or branch protection requirements. Sibling context (github_rollback) is implied but not explicit.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both parameters. Description doesn't explicitly map parameters to behavior (e.g., doesn't clarify if 'label' affects the tag name or is separate metadata), but with complete schema documentation, baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'Create' + resource 'checkpoint/safe point' + scope 'current main branch'. Clarifies mechanism ('Tags with safe-YYYY-MM-DD-NNN') and distinguishes from siblings like github_write (tagging vs file writing) and github_rollback (creating checkpoint vs reverting to one).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit temporal guidance: 'Use this before risky changes or when the solution is in a known-good state.' Establishes relationship to rollback workflow ('so you can rollback to it later'), though it could explicitly name ateam_github_rollback as the complementary tool.
ateam_github_pull (Grade: A)
Deploy a solution FROM its GitHub repo. Reads .ateam/export.json + connector source from the repo and feeds it into the deploy pipeline. Use this to restore a previous version or deploy from GitHub as the source of truth.
| Name | Required | Description | Default |
|---|---|---|---|
| solution_id | Yes | The solution ID to pull and deploy from GitHub |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the mechanism (reads .ateam/export.json + connector source, feeds deploy pipeline) and implies destructive overwrite via 'restore', but lacks explicit safety warnings about overwriting current state, required permissions, or error conditions (e.g., if repo is missing export.json).
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first defines action and mechanism, second defines use cases. Information is front-loaded with the core verb 'Deploy' and directional qualifier 'FROM' appearing immediately.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of GitHub-integrated deployment and lack of output schema, the description adequately covers the essential behavioral contract (input source, processing pipeline, intent). Minor gap: does not describe return value or deployment confirmation behavior, which would help given no output schema exists.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description ('The solution ID to pull and deploy from GitHub'), establishing baseline 3. The main description adds implicit context that this ID references the GitHub repository state rather than local state, but does not add syntax details or validation rules beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Deploy a solution FROM its GitHub repo' uses a concrete verb (deploy), identifies the resource (solution), and the directional preposition 'FROM' clearly distinguishes this from sibling 'ateam_github_push' and local deployment tools like 'ateam_redeploy'. The mechanism details (.ateam/export.json) further clarify scope.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use scenarios ('restore a previous version', 'deploy from GitHub as the source of truth') that clearly indicate this is for GitHub-centric workflows. However, it does not explicitly name alternatives like ateam_redeploy for local changes or ateam_github_rollback for specific version targeting.
ateam_github_push (Grade: A)
Push the current deployed solution to GitHub. Auto-creates the repo on first use. Commits the full bundle (solution + skills + connector source) atomically. Use after ateam_build_and_run to version your solution, or anytime you want to snapshot the current state.
| Name | Required | Description | Default |
|---|---|---|---|
| message | No | Optional commit message (default: 'Deploy <solution_id>') | |
| solution_id | Yes | The solution ID (e.g. 'smart-home-assistant') |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses important behavioral traits: auto-creates repo on first use (side effect), commits atomically (transactional safety), and scope of commit (full bundle vs. partial). Deducted one point for not mentioning failure modes, idempotency beyond repo creation, or auth requirements.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste. Front-loaded with core action ('Push...'), followed by side effects, behavioral details, and usage guidelines. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, description adequately covers tool behavior, side effects, and workflow context. Sufficient for correct agent invocation, though could be strengthened by describing return values or error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description does not explicitly discuss parameter syntax or formats beyond what the schema provides, though it does contextualize solution_id by explaining what the 'full bundle' includes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Push' + resource 'deployed solution to GitHub' clearly stated. Explicitly distinguishes from siblings like ateam_github_pull or ateam_github_write by specifying it commits the 'full bundle (solution + skills + connector source)' atomically, not just individual files.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use after ateam_build_and_run to version your solution' and provides alternative usage scenario 'or anytime you want to snapshot the current state.' Clear workflow integration guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_github_read (grade A)
Read any file from a solution's GitHub repo. Returns the file content. Use this to read connector source code, skill definitions, or any versioned file. Great for reviewing previous versions or understanding what's in the repo.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | File path in the repo (e.g. 'connectors/home-assistant-mcp/server.js', 'solution.json', 'skills/order-support/skill.json') | |
| solution_id | Yes | The solution ID | |
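An illustrative sketch of batching read requests for a review of several versioned files. The repo-relative path examples are taken from the schema above; the helper and its validation rule are client-side assumptions, not server behavior.

```python
def build_read_call(solution_id, path):
    """Build a tools/call body for ateam_github_read (illustrative only)."""
    if not path or path.startswith("/"):
        # The schema's examples are repo-relative, e.g. 'solution.json'.
        raise ValueError("path must be a repo-relative file path")
    return {
        "method": "tools/call",
        "params": {
            "name": "ateam_github_read",
            "arguments": {"solution_id": solution_id, "path": path},
        },
    }

review = [
    build_read_call("smart-home-assistant", p)
    for p in ("solution.json", "skills/order-support/skill.json")
]
```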
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Only behavioral disclosure is 'Returns the file content'. Missing: error handling (what if file doesn't exist?), authentication requirements, return format (string vs object), or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences: first defines action/return value, second specifies use cases, third provides workflow context ('reviewing previous versions'). No redundancy or filler. Well front-loaded with core action in first sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so description must explain return values—it mentions 'file content' but omits format details (encoding, structure). No annotations cover safety/destructive hints. Adequate for a read operation but would benefit from error case documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed examples in path description (e.g., 'connectors/home-assistant-mcp/server.js'). Description reinforces intent by listing file types ('connector source code, skill definitions') but doesn't add syntax or format details beyond schema. Baseline 3 appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Read' + resource 'file from a solution's GitHub repo' clearly defines scope. Explicitly mentions 'versioned file' and 'reviewing previous versions', distinguishing it from siblings like ateam_get_connector_source (likely for live source) and ateam_github_write/patch.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context: 'Use this to read connector source code, skill definitions' and 'Great for reviewing previous versions'. However, lacks explicit 'when not to use' guidance or named alternatives (e.g., doesn't mention ateam_get_connector_source for live code).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_github_rollback (grade A)
Rollback main branch to a previous checkpoint (safe-* tag). Resets main to the specified checkpoint commit. ⚠️ DESTRUCTIVE — use with caution. Use ateam_github_list_versions to find available checkpoints first.
| Name | Required | Description | Default |
|---|---|---|---|
| tag | Yes | Required: checkpoint tag to rollback to (e.g., 'safe-2026-03-11-001') | |
| solution_id | Yes | The solution ID | |
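Because this call is destructive, a cautious client might guard the tag before sending it. The regex below is inferred from the single example 'safe-2026-03-11-001'; real checkpoint tags may be looser, so treat the pattern (and the helper itself) as an assumption.

```python
import re

# Inferred checkpoint-tag shape; an assumption, not a documented constraint.
SAFE_TAG = re.compile(r"^safe-\d{4}-\d{2}-\d{2}-\d{3}$")

def build_rollback_call(solution_id, tag):
    if not SAFE_TAG.match(tag):
        # Refuse anything that is not a safe-* checkpoint before resetting main.
        raise ValueError(f"not a recognized checkpoint tag: {tag!r}")
    return {
        "method": "tools/call",
        "params": {
            "name": "ateam_github_rollback",
            "arguments": {"solution_id": solution_id, "tag": tag},
        },
    }

call = build_rollback_call("smart-home-assistant", "safe-2026-03-11-001")
```

In practice the tag would come from a prior ateam_github_list_versions call rather than a hard-coded string.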
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Explicitly discloses '⚠️ DESTRUCTIVE' nature and specific mechanism ('Resets main to the specified checkpoint commit'). Could enhance with specifics on reversibility, branch protection behavior, or git implementation details, but covers critical safety profile adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: action definition, technical mechanism, and warning/prerequisite. Front-loaded with primary purpose. Warning emoji and explicit capitalization effectively signal critical behavioral trait without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a destructive git operation despite missing annotations. Covers prerequisite workflow, safety warnings, and tag conventions. Minor gap: no output schema provided and description doesn't indicate success/failure indicators or side effects on other branches.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds valuable semantic context for the 'tag' parameter by specifying the expected 'safe-*' naming convention, complementing the schema's example. The semantics of solution_id rely on the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb (rollback) + resource (main branch) + scope (to previous checkpoint). Distinguishes from siblings by referencing specific 'safe-*' tag pattern and explicitly naming prerequisite sibling tool ateam_github_list_versions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit prerequisite workflow: 'Use ateam_github_list_versions to find available checkpoints first.' Clear warning indicates when to exercise caution. Effectively distinguishes from other git tools (push/pull/patch) by emphasizing destructive rollback nature.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_github_status (grade A)
Check if a solution has a GitHub repo, its URL, and the latest commit. Use this to verify GitHub integration is working for a solution.
| Name | Required | Description | Default |
|---|---|---|---|
| solution_id | Yes | The solution ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It effectively lists returned data points (existence, URL, commit) but omits safety properties (read-only nature), error behavior when repo doesn't exist, or idempotency. 'Check' implies read-only, but explicit safety disclosure is missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first defines functionality, second specifies use case. Front-loaded with essential information and no filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple single-parameter input and lack of output schema, description adequately compensates by listing specific return values (URL, latest commit, existence check). Sufficient for low-complexity tool, though response structure/format remains unspecified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (solution_id described as 'The solution ID'), establishing baseline 3. Description mentions 'solution' in context but does not add semantic depth, validation rules, or format details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with clear resources 'GitHub repo, its URL, and the latest commit'. Effectively distinguishes from sibling GitHub tools (read, write, push, pull) by focusing on existence verification and metadata retrieval rather than content manipulation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use case 'Use this to verify GitHub integration is working for a solution', giving clear context for when to invoke. Lacks explicit exclusions or named alternatives (e.g., 'use ateam_github_read for file contents'), but the verification context is specific enough to guide selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_github_write (grade A)
Write a file to the solution's GitHub repo. Use this to create new connector files or replace existing ones — one file per call. This is the PRIMARY way to write connector code after first deploy. Write each file individually (server.js, package.json, UI assets), then call ateam_build_and_run() to deploy.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | File path to write (e.g. 'connectors/my-mcp/server.js', 'connectors/my-mcp/package.json') | |
| content | Yes | The full file content | |
| message | No | Optional commit message (default: 'Write <path>') | |
| solution_id | Yes | The solution ID | |
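A workflow sketch mirroring the description: write each file individually, then deploy once. The file contents, and the assumption that ateam_build_and_run takes only a solution_id, are illustrative.

```python
def build_write_call(solution_id, path, content, message=None):
    args = {"solution_id": solution_id, "path": path, "content": content}
    if message is not None:
        args["message"] = message  # defaults server-side to 'Write <path>'
    return {"method": "tools/call",
            "params": {"name": "ateam_github_write", "arguments": args}}

files = {
    "connectors/my-mcp/server.js": "// connector entry point",
    "connectors/my-mcp/package.json": '{"name": "my-mcp"}',
}
# One file per call, as the description requires:
queue = [build_write_call("smart-home-assistant", p, c) for p, c in files.items()]
# Single deploy step after all writes (argument shape is an assumption):
queue.append({"method": "tools/call",
              "params": {"name": "ateam_build_and_run",
                         "arguments": {"solution_id": "smart-home-assistant"}}})
```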
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries a moderate burden adequately. It discloses destructive potential ('replace existing ones'), the per-call scope constraint ('one file per call'), and the separation between writing and deploying (a write does not deploy by itself). However, it lacks auth requirements, commit behavior details, and the overwrite warnings typical for write operations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: opens with core action, follows with scope/use-case, then primary status/timing, closes with workflow next-step. Every sentence earns its place; examples are efficiently integrated without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a 4-param write tool: covers purpose, constraints, workflow integration, and sibling coordination. Absent output schema means description needn't explain returns. Minor gap: could clarify commit behavior or branch handling given GitHub context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage, establishing a baseline of 3. The description adds value with concrete path examples ('connectors/my-mcp/server.js'; 'server.js, package.json, UI assets') that contextualize the expected path formats and file types, exceeding the baseline documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Write a file to the solution's GitHub repo' provides clear verb+resource, distinguishes from siblings by noting it's for 'connector files' and 'after first deploy' (implying different tools for initial setup), and scopes with 'one file per call' constraint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Outstanding guidance: explicitly states 'PRIMARY way to write connector code after first deploy' establishing preference over alternatives, names specific successor tool 'ateam_build_and_run() to deploy' clarifying workflow sequence, and clarifies dual use case 'create new... or replace existing'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_list_solutions (grade B)
List all solutions deployed in the Skill Builder.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. While it specifies the scope ('deployed in the Skill Builder'), it lacks critical behavioral details such as pagination behavior, output format, or performance characteristics of listing all solutions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with 9 words. It is front-loaded with the action and contains no redundant or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool, the description adequately covers the basic purpose, but is incomplete regarding what data is returned (no output schema exists to compensate). It meets minimum viability but leaves gaps in operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, which per the rubric establishes a baseline score of 4. There are no parameters to document.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'List' with resource 'solutions' and scope 'deployed in the Skill Builder', clearly indicating what the tool does. However, it does not explicitly differentiate from sibling tool 'ateam_get_solution' (singular vs plural).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'ateam_get_solution', nor does it mention prerequisites or filtering capabilities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_patch (grade A)
Surgically update ANY field in a skill or solution definition, redeploy, and optionally re-test — all in one step.
SUPPORTED OPERATIONS:
- Scalar (dot notation): { "problem.statement": "new value", "role.persona": "You are..." }
- Deep nested: { "intents.thresholds.accept": 0.9, "policy.escalation.enabled": true }
- Array push: { "tools_push": [{ name: "new_tool", description: "..." }] }
- Array delete: { "tools_delete": ["tool_name"] }
- Array update: { "tools_update": [{ name: "existing_tool", description: "updated" }] }
- Replace whole section: { "role": { persona: "...", goals: [...] } }
EXAMPLES:
- Change persona: updates: { "role.persona": "You are a friendly assistant" }
- Add a guardrail: updates: { "policy.guardrails.never_push": ["Never share passwords"] }
- Update problem: updates: { "problem.statement": "...", "problem.goals": ["goal1"] }
- Add a tool: updates: { "tools_push": [{ name: "conn.tool", description: "...", inputs: [...], output: {...} }] }
- Change intent: updates: { "intents.supported_update": [{ id: "i1", description: "new desc" }] }
- Force redeploy: updates: { "_force_redeploy": true }
- CREATE a new skill: target='skill', skill_id='my-new-skill', updates: { "problem.statement": "...", "role.persona": "..." } If the skill doesn't exist yet, a default scaffold is created and the updates are applied on top. The skill is automatically added to the solution topology.
Use target='skill' + skill_id for skill fields. Use target='solution' for solution-level fields (linked_skills, platform_connectors, ui_plugins).
| Name | Required | Description | Default |
|---|---|---|---|
| target | Yes | What to update: 'solution' for solution definition, 'skill' for skill definition fields (problem, role, intents, tools, policy, engine, scenarios, etc.) | |
| updates | Yes | The update payload. Use dot notation for nested scalars (e.g. 'problem.statement': 'new value'). For arrays, use _push/_delete/_update suffixes (e.g. 'tools_push', 'tools_delete'). You can update ANY field in the skill definition: problem, role, intents, tools, policy, engine, scenarios, glossary, etc. | |
| skill_id | No | Required when target is 'skill'. The skill ID to patch. | |
| solution_id | Yes | The solution ID | |
| test_message | No | Optional: re-test the skill after patching. Requires skill_id. | |
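A sketch combining the documented update syntaxes into a single ateam_patch call: a dot-notation scalar, a *_push array append, and the optional post-patch test. The helper and its client-side validation are illustrative; only the parameter names and syntax come from the definitions above.

```python
def build_patch_call(solution_id, target, updates, skill_id=None, test_message=None):
    """Assemble a tools/call body for ateam_patch (illustrative helper)."""
    if target == "skill" and skill_id is None:
        # Mirrors the schema rule: skill_id is required when target='skill'.
        raise ValueError("skill_id is required when target='skill'")
    args = {"solution_id": solution_id, "target": target, "updates": updates}
    if skill_id is not None:
        args["skill_id"] = skill_id
    if test_message is not None:
        args["test_message"] = test_message  # requires skill_id per the schema
    return {"method": "tools/call",
            "params": {"name": "ateam_patch", "arguments": args}}

call = build_patch_call(
    "smart-home-assistant",
    target="skill",
    skill_id="order-support",
    updates={
        "role.persona": "You are a friendly assistant",  # dot-notation scalar
        "tools_push": [{"name": "conn.tool", "description": "added tool"}],
    },
    test_message="What can you do?",
)
```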
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully explains the composite behavior (patch + redeploy + optional retest) and mutation patterns (array push/delete/update). However, it omits safety-critical context like reversibility, partial failure handling (what if redeploy fails after patch?), or required permissions for destructive operations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear headers (SUPPORTED OPERATIONS, EXAMPLES) that support skimming. Every section delivers unique value—operations define syntax, examples demonstrate valid values. Slightly verbose but information-dense; the front-loaded summary sentence immediately communicates intent before diving into complex operation details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a complex mutation tool with nested object parameters. Covers the six supported operation types comprehensively. No output schema exists, and the description appropriately focuses on input semantics rather than return values. Missing only safety/risk documentation that would be expected for a destructive 'surgical' modification tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), the description adds substantial semantic value beyond the schema. It details the dot-notation syntax for the 'updates' parameter, documents array operation suffixes (_push/_delete/_update), enumerates patchable fields (problem, role, policy, etc.), and provides concrete value examples that clarify the expected structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with specific verbs ('Surgically update', 'redeploy', 're-test') and clearly identifies the resource (skill or solution definition). It distinguishes from sibling tool ateam_redeploy by emphasizing the atomic 'all in one step' nature, implying ateam_redeploy is for redeployment only without patching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on the target parameter ('Use target=skill + skill_id for skill fields... target=solution for solution-level fields'). The 'all in one step' phrasing distinguishes this from using separate tools for update vs redeploy. Lacks explicit 'when-not-to-use' warnings (e.g., 'don't use for GitHub file patches' vs sibling ateam_github_patch).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_redeploy (grade A)
Re-deploy skills WITHOUT changing any definitions. ⚠️ HEAVY OPERATION: regenerates MCP servers (Python code) for every skill, pushes each to A-Team Core, restarts connectors, and verifies tool discovery. Takes 30-120s depending on skill count. Use after connector restarts, Core hiccups, or stale state. For incremental changes, prefer ateam_patch (which updates + redeploys in one step).
| Name | Required | Description | Default |
|---|---|---|---|
| skill_id | No | Optional: redeploy a single skill only. Omit to redeploy ALL skills in the solution. | |
| solution_id | Yes | The solution ID to redeploy | |
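A scope sketch: omitting skill_id triggers the heavy all-skills redeploy, so a careful client defaults to the narrow single-skill form. The helper is hypothetical; the scoping behavior comes from the schema above.

```python
def build_redeploy_call(solution_id, skill_id=None):
    args = {"solution_id": solution_id}
    if skill_id is not None:
        # Narrow scope: one skill instead of the 30-120s full redeploy.
        args["skill_id"] = skill_id
    return {"method": "tools/call",
            "params": {"name": "ateam_redeploy", "arguments": args}}

single = build_redeploy_call("smart-home-assistant", skill_id="order-support")
full = build_redeploy_call("smart-home-assistant")  # redeploys ALL skills
```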
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Outstanding given no annotations: Discloses multi-step internals ('regenerates MCP servers... pushes... restarts... verifies'), performance characteristics ('30-120s'), and operational weight ('HEAVY OPERATION'). Agent understands exact cost and side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured: opening constraint (no definition changes), warning symbol with heavy operation details, timing estimate, usage conditions, and alternative—all in compact form. Every sentence conveys unique operational intelligence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete despite no output schema: Description adequately covers the 'heavy operation' nature, prerequisite conditions (stale state), and sibling relationships. With rich schema coverage and behavioral disclosure, nothing essential is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both params fully documented), establishing baseline 3. Description adds value by contextualizing 'skill count' (relating to optional skill_id parameter) and implying the scope difference between single-skill and full-solution deployment in the timing warning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent: 'Re-deploy skills WITHOUT changing any definitions' provides specific verb (re-deploy), resource (skills), and scope constraint. Explicitly distinguishes from sibling 'ateam_patch' by contrasting with 'incremental changes' use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Exemplary: Explicitly states when to use ('after connector restarts, Core hiccups, or stale state') and explicitly names alternative ('prefer ateam_patch'). Clear guidance on heavy vs. light operational paths.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_status_all (grade A)
Show GitHub sync status for ALL tenants and solutions in one call. Requires master key authentication. Returns a summary table of every tenant's solutions with their GitHub sync state.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and successfully communicates authentication requirements ('master key') and return format ('summary table'). It implies read-only safety through the verb 'Show,' though explicit confirmation of idempotency would strengthen it further.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: scope and action (sentence 1), authentication constraint (sentence 2), and return value (sentence 3). Information is front-loaded with the core purpose stated immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description adequately compensates by disclosing the authentication barrier and return format. It sufficiently covers a simple read-only status tool, though enumerating possible sync state values would achieve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description appropriately requires no additional parameter clarification since the tool operates as a parameterless bulk status request.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Show GitHub sync status'), the resource (tenants and solutions), and distinguishes itself from siblings like 'ateam_github_status' by emphasizing the 'ALL' scope and 'in one call' bulk operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides critical usage context through 'Requires master key authentication,' establishing a clear prerequisite. It implies bulk usage via 'ALL tenants,' though it could be strengthened by explicitly naming the alternative tool for single-tenant queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_sync_all (grade A)
Sync ALL tenants: push Builder FS → GitHub, then pull GitHub → Core MongoDB. Requires master key authentication. Returns a summary table with results for each tenant/solution.
| Name | Required | Description | Default |
|---|---|---|---|
| pull_only | No | Only pull from GitHub to Core (skip push). Default: false (full sync). | |
| push_only | No | Only push to GitHub (skip pull to Core). Default: false (full sync). | |
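The two flags narrow the full sync in opposite directions; treating them as mutually exclusive is a client-side assumption, since the schema does not say what happens if both are true. Everything else mirrors the table above.

```python
def build_sync_all_call(pull_only=False, push_only=False):
    if pull_only and push_only:
        # Assumed mutually exclusive: "skip push" + "skip pull" = no-op.
        raise ValueError("pull_only and push_only cannot both be set")
    args = {}
    if pull_only:
        args["pull_only"] = True   # skip the Builder FS -> GitHub push
    if push_only:
        args["push_only"] = True   # skip the GitHub -> Core MongoDB pull
    return {"method": "tools/call",
            "params": {"name": "ateam_sync_all", "arguments": args}}

full_sync = build_sync_all_call()               # default: push then pull
pull_sync = build_sync_all_call(pull_only=True)
```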
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Successfully documents authentication requirements and return format ('summary table with results'). Omits error handling behavior (e.g., partial tenant failures) and idempotency characteristics that would be critical for a bulk operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three compact sentences with zero redundancy. The main action and scope are front-loaded ('Sync ALL tenants:'), followed by authentication constraints and return value. Every sentence provides distinct operational information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers multi-system scope, authentication, and return values despite lacking annotations or output schema. Given the high complexity (affecting all tenants across three systems) and lack of structured safety hints, the description provides sufficient but not exhaustive operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema has 100% description coverage, the description adds essential semantic context by mapping parameters to the conceptual sync flow—explaining that pull_only skips 'push Builder FS → GitHub' and push_only skips 'pull GitHub → Core MongoDB.' This clarifies the parameter purposes beyond the boolean definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specifies exact verb ('Sync'), target ('ALL tenants'), and bidirectional data flow ('Builder FS → GitHub → Core MongoDB'). Clearly distinguishes from unidirectional siblings like ateam_github_push by describing the complete two-stage pipeline.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the prerequisite 'Requires master key authentication,' providing clear access control context. However, lacks explicit guidance on when to use this combined tool versus individual ateam_github_push or ateam_github_pull operations for single-direction syncs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_test_abort
Abort a running skill test. Stops the job execution at the next iteration boundary. (Advanced.)
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | The job ID to abort | |
| skill_id | Yes | The skill ID | |
| solution_id | Yes | The solution ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the graceful stop mechanism ('next iteration boundary'), but omits safety-critical details like whether the abort is reversible, side effects on the solution/skill, permission requirements, or what happens to partial results.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The three-part structure is front-loaded with the primary action, followed by behavioral mechanics and complexity tagging. It is appropriately brief with minimal waste, though the parenthetical '(Advanced.)' could be integrated into a complete sentence for smoother flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter destructive operation with no output schema, the description provides the essential functional context but leaves gaps regarding prerequisites (how to discover the job_id), success/failure indicators, and consequences of aborting. It meets minimum viability but lacks richness for safe operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'job execution' which loosely maps to the job_id parameter, but adds no specific semantic guidance about where to obtain these IDs, their format, or relationships between solution_id, skill_id, and job_id beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the action (Abort) and the specific resource type (running skill test), effectively distinguishing it from sibling tools like ateam_test_pipeline, ateam_test_connector, and ateam_test_voice. The scope is precisely defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The '(Advanced.)' tag provides minimal guidance about expertise requirements, and the 'next iteration boundary' detail implies graceful termination behavior. However, there are no explicit when-to-use/when-not-to-use rules, prerequisites (e.g., checking status first), or alternatives mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_test_connector
Call a tool on a running connector and get the result. Use this to test individual connector tools (e.g., triggers.list, entities.list, google.command) without deploying to a client. The connector must be connected and running.
| Name | Required | Description | Default |
|---|---|---|---|
| args | No | Optional: arguments to pass to the tool | |
| tool | Yes | The tool name to call (e.g., 'triggers.list', 'entities.list', 'google.devices') | |
| solution_id | Yes | The solution ID | |
| connector_id | Yes | The connector ID (e.g., 'home-assistant-mcp', 'google-home-mcp') |
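The table above nests the sub-tool's arguments inside an optional args object. A sketch of a complete payload, using example IDs drawn from the table; the nested args content ('domain': 'light') is invented for illustration:

```python
# Hypothetical argument payload for ateam_test_connector.
# 'triggers.list' and 'home-assistant-mcp' are examples cited in the
# parameter table; the solution ID and nested args are invented.
payload = {
    "solution_id": "sol_demo",             # invented example ID
    "connector_id": "home-assistant-mcp",  # example ID from the table
    "tool": "triggers.list",               # sub-tool to invoke on the connector
    "args": {"domain": "light"},           # optional, passed through to the sub-tool
}

required = {"solution_id", "connector_id", "tool"}
assert required <= payload.keys()
```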
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the prerequisite that the connector must be running and implies synchronous execution ('get the result'), but omits safety profile (read-only vs. destructive), error handling, or side effects of calling arbitrary sub-tools.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences: first defines core action, second provides use case with examples, third states prerequisites. No redundancy, well front-loaded with the essential verb-object pattern.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given moderate complexity (4 params, nested args object) and lack of output schema, the description adequately covers intent, prerequisites, and examples. Could be improved by noting error states (e.g., connector not running) or return structure, but sufficient for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The description repeats examples already present in schema property descriptions (e.g., 'triggers.list') without adding new semantic details about parameter formats, validation rules, or nested args structure beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Call a tool on a running connector'), the resource type (individual connector tools), and distinguishes from siblings by emphasizing 'without deploying to a client'—contrasting with ateam_build_and_run or deployment tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear prerequisites ('The connector must be connected and running') and context ('Use this to test... without deploying to a client'). Lacks an explicit named alternative, though 'without deploying' implies when to prefer this over deployment tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_test_pipeline
Test the decision pipeline (intent detection → planning) for a skill WITHOUT executing tools. Returns intent classification, first planned action, and timing. Use this to debug why a skill classifies intent incorrectly or plans the wrong action.
| Name | Required | Description | Default |
|---|---|---|---|
| message | Yes | The test message to classify and plan for | |
| skill_id | Yes | The skill ID to test | |
| solution_id | Yes | The solution ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates that no tools are executed (safety), what data is returned ('intent classification, first planned action, and timing'), and the debugging purpose. It could be improved by noting any side effects like logging or state persistence.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two highly efficient sentences. The first sentence front-loads the core functionality and key constraint (non-execution), while the second sentence immediately follows with the specific use case. There is zero redundant or wasted language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema), the description adequately covers the essential information by specifying what the tool returns even without a formal output schema. It is complete enough for an agent to select and invoke correctly, though explicit output format details would achieve a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage across all three parameters, the schema already fully documents the inputs. The description references 'skill' and 'message' conceptually but does not add syntax details, constraints, or examples beyond what the schema provides, making 3 the appropriate baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states exactly what the tool does using specific verbs ('Test') and resources ('decision pipeline'), including the specific subprocesses covered ('intent detection → planning'). It effectively distinguishes itself from execution-focused siblings like `ateam_test_skill` by explicitly stating it runs 'WITHOUT executing tools'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear guidance on when to use the tool ('Use this to debug why a skill classifies intent incorrectly or plans the wrong action'). This implies the appropriate troubleshooting context, though it could be strengthened by explicitly contrasting with `ateam_test_skill` for cases requiring full execution.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_test_skill
Send a test message to a deployed skill and get the full execution result. By default waits for completion (up to 60s). Set wait=false for async mode — returns job_id immediately, then poll with ateam_test_status.
| Name | Required | Description | Default |
|---|---|---|---|
| wait | No | If true (default), wait for completion. If false, return job_id immediately for polling via ateam_test_status. | |
| message | Yes | The test message to send to the skill | |
| actor_id | No | Optional actor ID for conversation continuity. Pass the actor_id from a previous test response to continue the conversation. Omit to auto-generate a test actor (test_<timestamp>_<random>, auto-expires in 24h). | |
| skill_id | Yes | The skill ID to test (original or internal ID) | |
| solution_id | Yes | The solution ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the 60s timeout and async behavior (returns job_id), but lacks details on error handling, execution result structure, or rate limiting that would be expected for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes purpose and default behavior; second sentence explains async alternative and polling mechanism. Perfectly front-loaded and dense with actionable information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description covers the execution lifecycle well (sync result vs. async job_id) and references the polling sibling. Only minor gap is lack of detail on execution result structure or error scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds valuable workflow semantics for the 'wait' parameter by explaining the async/polling pattern and timeout behavior, exceeding the schema's basic boolean description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Send' with resource 'deployed skill' and outcome 'get the full execution result'. It distinguishes from sibling testing tools (ateam_test_connector, ateam_test_pipeline, etc.) by specifying 'skill', and explicitly references ateam_test_status for the async polling workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clearly defines when to use async mode ('Set wait=false for async mode') and explicitly names the sibling tool ateam_test_status for polling, providing complete guidance on the sync vs. async workflow decision.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_test_status
Poll the progress of an async skill test. Returns iteration count, tool call steps, status (running/completed/failed), and result when done. (Advanced — use ateam_test_skill with wait=true for synchronous testing.)
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | The job ID returned by ateam_test_skill | |
| skill_id | Yes | The skill ID | |
| solution_id | Yes | The solution ID |
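The sync-vs-async split between ateam_test_skill (wait=false) and ateam_test_status can be sketched as a bounded polling loop. Here call_tool is a stand-in for a real MCP client call, and its responses are simulated purely for illustration:

```python
import time

def call_tool(name, args):
    """Simulated MCP client call; in real use, responses come from the server."""
    if name == "ateam_test_skill":
        return {"job_id": "job_123"}  # async mode returns a job_id immediately
    # Simulate ateam_test_status: pretend the job finishes on the third poll.
    call_tool.polls = getattr(call_tool, "polls", 0) + 1
    status = "completed" if call_tool.polls >= 3 else "running"
    return {"status": status, "iterations": call_tool.polls}

job = call_tool("ateam_test_skill", {
    "solution_id": "sol_demo", "skill_id": "sk_demo",
    "message": "hello", "wait": False,   # async: poll instead of blocking
})
result = None
for _ in range(30):                      # bound the loop; never poll forever
    result = call_tool("ateam_test_status", {
        "solution_id": "sol_demo", "skill_id": "sk_demo",
        "job_id": job["job_id"],
    })
    if result["status"] in ("completed", "failed"):
        break
    time.sleep(0)                        # back off between polls in real use
```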
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full disclosure burden. It effectively documents return values (iteration count, tool call steps, status states) and the state machine (running/completed/failed), though it omits error handling details or rate limiting considerations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient two-sentence structure: first sentence covers purpose and return values, second sentence provides usage guidance. No redundancy, every word earns its place, parenthetical placement is appropriate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter polling tool without output schema, the description compensates well by detailing the return structure and status states. It adequately covers the async testing workflow in conjunction with its sibling tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters including the critical relationship that job_id comes from ateam_test_skill. The description adds minimal semantic value beyond the schema's existing documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Poll' with clear resource 'progress of an async skill test', explicitly distinguishing it from synchronous alternatives. It clearly identifies this as an async progress-checking tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('Advanced') and explicitly names the alternative tool (ateam_test_skill with wait=true) for synchronous testing, creating clear decision criteria between the async and sync paths.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_test_voice
Simulate a voice conversation with a deployed solution. Runs the full voice pipeline (session → caller verification → prompt → skill dispatch → response) using text instead of audio. Returns each turn with bot response, verification status, tool calls, and entities. Use this to test voice-enabled solutions end-to-end without making a phone call.
| Name | Required | Description | Default |
|---|---|---|---|
| messages | Yes | Array of user messages to send sequentially (simulates a multi-turn phone conversation) | |
| skill_slug | No | Optional: target a specific skill by slug instead of using voice routing. | |
| timeout_ms | No | Optional: max wait time per skill execution in milliseconds (default: 60000). | |
| solution_id | Yes | The solution ID | |
| phone_number | No | Optional: simulated caller phone number (e.g., '+14155551234'). If the number is in the solution's known phones list, the caller is auto-verified. |
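Because messages simulates a multi-turn phone call, the payload is an array of user turns plus optional caller metadata. A sketch with invented conversation content; the phone number follows the example format in the table, and per the table a number on the solution's known-phones list would auto-verify the caller:

```python
# Hypothetical argument payload for ateam_test_voice, per the table above.
# The solution ID, messages, and appointment scenario are invented.
payload = {
    "solution_id": "sol_demo",            # invented example ID
    "messages": [                         # sequential turns of the simulated call
        "Hi, I'd like to check my appointment.",
        "Tomorrow at 3pm, please.",
    ],
    "phone_number": "+14155551234",       # example format from the table
    "timeout_ms": 60000,                  # per-skill-execution wait (table default)
}

assert isinstance(payload["messages"], list) and payload["messages"]
```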
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses detailed behavioral flow (session → caller verification → prompt → skill dispatch → response) and return structure (bot response, verification status, tool calls, entities). Lacks explicit safety disclosure (state mutation, production impact) but 'simulate' implies read-only testing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: (1) purpose declaration, (2) pipeline mechanics, (3) return value disclosure, (4) usage guidance. Well front-loaded with critical information first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking output schema, description comprehensively explains return values (turns, verification status, entities). Addresses complexity of 5-parameter voice pipeline tool. Minor gap regarding whether simulation affects production state or requires specific auth.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description adds conceptual context linking parameters to voice simulation (text vs audio, multi-turn conversation) but does not add parameter-specific semantics beyond what schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Simulate' with resource 'voice conversation' and explicitly states it runs the 'full voice pipeline' using 'text instead of audio.' Effectively distinguishes from sibling tools like ateam_test_skill by emphasizing voice-specific stages (caller verification) and end-to-end scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'test voice-enabled solutions end-to-end without making a phone call.' Provides clear context for simulation vs. real calls. Could improve by explicitly contrasting with ateam_test_skill or ateam_test_pipeline for routing decisions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ateam_upload_connector
Upload connector code to Core and restart — WITHOUT redeploying skills. Use this to update connector source code (server.js, UI assets, plugins) quickly. Set github=true to pull files from the solution's GitHub repo, or pass files directly. Much faster than ateam_build_and_run for connector-only changes.
| Name | Required | Description | Default |
|---|---|---|---|
| files | No | Files to upload. Alternative to github=true. | |
| github | No | If true, pull connector files from GitHub repo. Default: false. | |
| solution_id | Yes | The solution ID | |
| connector_id | Yes | The connector ID to upload (e.g. 'personal-assistant-ui-mcp') |
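The description offers two input modes: set github=true to pull from the repo, or pass files directly. A small sketch that assembles the arguments accordingly; the helper name is invented, and treating the two modes as mutually exclusive is an assumption of this sketch rather than documented behavior:

```python
# Hypothetical helper for ateam_upload_connector arguments.
# Mutual exclusivity of github and files is assumed, not documented.
def build_upload_args(solution_id, connector_id, files=None, github=False):
    if github and files:
        raise ValueError("pass either github=True or files, not both")
    args = {"solution_id": solution_id, "connector_id": connector_id}
    if github:
        args["github"] = True   # pull connector files from the solution's repo
    elif files:
        args["files"] = files   # upload the given files directly
    return args

args = build_upload_args("sol_demo", "personal-assistant-ui-mcp", github=True)
assert args["github"] is True
```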
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses that the operation performs a restart and, crucially, does NOT redeploy skills. It lacks details on atomicity or rollback behavior, but covers the critical scope limitation that distinguishes it from full redeployment.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: first establishes action and key limitation (no skill redeploy), second specifies use case and file types, third covers input methods and performance comparison. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and the absence of an output schema (which, per the scoring rules, need not be explained), the description adequately covers the tool's purpose, differentiates it from its major sibling alternative, and explains both input modalities. Adequate for a focused connector upload utility.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds valuable narrative context linking the 'github' flag to 'the solution's GitHub repo' (adding ownership context) and framing 'files' as the manual alternative to the GitHub pull, reinforcing their semantic relationship despite clear schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a precise verb-resource pair ('Upload connector code') and immediately distinguishes scope with 'WITHOUT redeploying skills.' It explicitly contrasts with sibling 'ateam_build_and_run' later in the text, clarifying this is for connector-only updates, not full builds.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('connector-only changes'), names the alternative tool to avoid ('ateam_build_and_run'), and clarifies the two mutually exclusive input methods ('Set github=true... or pass files directly'). The performance hint ('Much faster') further guides selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.