Axint
Server Details
Apple-native AI agent execution: 35 MCP tools, 5 prompts, compile, validate, repair, and prove.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: agenticempire/axint
- GitHub Stars: 7
- Server Listing: axint
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 35 of 35 tools scored. Lowest: 2.1/5.
All 35 tools have clearly distinct purposes, with each tool targeting a specific operation (e.g., agent coordination, compilation, validation, project scaffolding). No two tools overlap in functionality, and descriptions clearly differentiate them.
All tools follow a consistent 'axint.<category>.<action>' naming pattern using lowercase and dots. The hierarchical structure is predictable, making it easy for agents to infer tool purpose from the name.
35 tools is high but appropriate given the comprehensive scope of the Axint ecosystem, covering agent coordination, project management, compilation, validation, repair, and more. The count is slightly on the heavy side but not excessive.
The tool surface covers the full development lifecycle for Apple/Axint projects: setup, scaffolding, compilation, validation, build, repair, agent coordination, and upgrade. No significant gaps are apparent.
Available Tools
35 tools

axint.agent.advice
Ask the local Axint project brain what this agent should do next. Reads project context, latest run proof, latest repair plan, and active file claims, then returns host-specific guidance for Codex, Claude, Cursor, Xcode, or another agent lane. Use: use when multiple tools or agents need the next safest move from local proof. Effects: reads local Axint context/proof and may refresh advice artifacts; no network.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Project directory. Defaults to the MCP process cwd. | |
| agent | No | Active host/tool lane. Axint adapts advice to the tools this agent can actually use. | |
| issue | No | Optional bug, feature, or repair goal to turn into project-aware next moves. | |
| format | No | Output format. Defaults to markdown. | |
| changedFiles | No | Files in scope. Axint uses these to detect claim conflicts and recommend proof. |
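As a sketch of how an agent might invoke this tool, the payload below uses the parameters from the table above wrapped in a standard MCP `tools/call` envelope; the envelope shape and every argument value are illustrative assumptions, not documented Axint behavior.

```typescript
// Hypothetical MCP tools/call payload for axint.agent.advice.
// All parameters are optional; cwd defaults to the MCP process cwd.
const adviceRequest = {
  method: "tools/call",
  params: {
    name: "axint.agent.advice",
    arguments: {
      agent: "claude",                              // active host/tool lane
      issue: "List view crashes on empty data",     // goal to turn into next moves
      changedFiles: ["Sources/App/ListView.swift"], // scope for claim-conflict checks
      format: "markdown",                           // the stated default
    },
  },
};
```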
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
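Every tool in this listing shares the same two-field output schema, so a caller can handle results uniformly. A minimal sketch (the interface name and error-handling policy are assumptions):

```typescript
// Hypothetical handling of the shared Axint output shape: every tool
// returns a primary `text` block plus an optional `isError` flag.
interface AxintToolResult {
  text: string;      // primary response text (first text content block)
  isError?: boolean; // set when Axint marked the response as an error
}

function handleResult(result: AxintToolResult): string {
  if (result.isError) {
    throw new Error(`Axint tool failed: ${result.text}`);
  }
  return result.text;
}
```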
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide no hints (all false), and the description only mentions reading context without disclosing potential side effects, modifications, or read-only guarantees. It adds minimal behavioral context beyond the basic reading operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very brief but ends abruptly with '...', indicating incompleteness. While front-loaded with the purpose, it fails to provide a complete sentence or necessary details, making it less effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 optional parameters, no output schema, and no annotations that clarify behavior, the description is severely incomplete. It does not specify return values, when parameters are needed, or how results should be interpreted, leaving significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not explain any of the 5 parameters, and the input schema has 0% description coverage. Parameters like cwd, agent, issue, format, and changedFiles are entirely undocumented in the description, leaving the agent without guidance on how to use them.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'ask' and the resource 'local Axint project brain' to get advice on what the agent should do next. It distinguishes from sibling tools like axint.agent.claim or axint.agent.install by focusing on advisory context rather than actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining guidance on next steps, but does not provide explicit when-to-use or when-not-to-use conditions nor mention alternative tools for similar tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.agent.claim
Claim files before an agent edits them so other agents do not patch the same SwiftUI/App files concurrently. Claims are local, short-lived, and stored in .axint/coordination/claims.json. Use: use before editing shared files in parallel-agent work; release claims when done. Effects: writes local coordination claims under .axint/coordination; no network.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Project directory. Defaults to the MCP process cwd. | |
| task | No | Task, bug, or repair pass this claim covers. | |
| agent | No | Agent lane creating the claim. | |
| files | Yes | Files to claim before editing. | |
| format | No | Output format. Defaults to markdown. | |
| ttlMinutes | No | Claim TTL in minutes. Defaults to 30. |
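A sketch of a claim call, with `files` (the only required parameter) plus an explicit TTL override; the envelope and values are illustrative assumptions:

```typescript
// Hypothetical payload: claim two SwiftUI files for 45 minutes before editing.
// Only `files` is required; ttlMinutes defaults to 30.
const claimRequest = {
  method: "tools/call",
  params: {
    name: "axint.agent.claim",
    arguments: {
      files: ["Sources/App/ContentView.swift", "Sources/App/Model.swift"], // required
      agent: "cursor",               // lane creating the claim
      task: "fix-empty-state-crash", // what this claim covers
      ttlMinutes: 45,                // override the 30-minute default
    },
  },
};
```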
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With all annotations false, the description bears full responsibility for behavioral context. It reveals that claims are local and prevent concurrent patching, offering some behavioral insight. However, it does not cover what happens on failure, whether claims are persistent, or if there are side effects. The truncated description may omit key details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short and front-loaded with the core purpose, which is good. However, it appears truncated ('Claims are local,...'), suggesting incomplete content. It could be more concise if complete, but as given, it lacks closure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (6 params, no output schema, 0% coverage), the description is insufficient. It does not explain parameters, return values, or error states. The tool's behavior around claims (e.g., duration, scope) is partially hinted but not fully specified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must explain parameters. It only implicitly references 'files' through 'Claim files'. Parameters like cwd, task, agent, format, and ttlMinutes are entirely unexplained, including the two enums. This severely hinders effective use.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool claims files before editing to prevent concurrent edits, specifying the action and purpose. It is specific to SwiftUI/App files and distinguishes from siblings like axint.agent.install or axint.agent.release by its unique role. However, the truncated sentence leaves slight ambiguity about the full scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear 'when to use' scenario: before agent edits to avoid conflicts. It does not, however, state when not to use or compare to alternatives like axint.agent.release, which might release claims. The guidance is implicit but lacks exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.agent.install (Idempotent)
Install the local Axint multi-agent project brain. Writes .axint/agent.json, .axint/context/latest.*, and .axint/coordination files so Codex, Claude, Cursor, Xcode, OpenClaw, and humans coordinate through the same local truth layer. Use: use once per project to create local multi-agent coordination; not needed for one-off compile. Effects: writes .axint/agent, context, and coordination files; no network.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Project directory. Defaults to the MCP process cwd. | |
| agent | No | Active host/tool lane. Defaults to all. | |
| force | No | Rewrite the existing local agent config if present. | |
| format | No | Output format. Defaults to markdown. | |
| privacyMode | No | Privacy posture for this project. Defaults to local_only; source sharing is never enabled by default. | |
| projectName | No | Optional project name override. | |
| providerMode | No | Optional model-provider posture for future AI-enhanced advice. Defaults to none. |
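A sketch of the one-time install call, stating the privacy default explicitly; the envelope and values are illustrative assumptions:

```typescript
// Hypothetical payload: one-time install of the local coordination layer.
// privacyMode defaults to local_only; force rewrites an existing config.
const installRequest = {
  method: "tools/call",
  params: {
    name: "axint.agent.install",
    arguments: {
      projectName: "MyMacApp",   // optional name override
      privacyMode: "local_only", // the default posture, stated explicitly
      force: false,              // do not clobber an existing .axint/agent.json
    },
  },
};
```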
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide idempotentHint and destructiveHint; description adds that it writes specific files (agent.json, context/latest.*, coordination files) which is useful context but not extensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is a single incomplete sentence ending with 'so...'; lacks structure and is too brief given the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 7 parameters and no output schema, the description omits essential details about parameter effects, return values, and side effects beyond file writes; severely incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% coverage and description does not explain any of the 7 parameters (cwd, agent, force, format, privacyMode, projectName, providerMode), leaving the agent without guidance on their purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Install the local Axint multi-agent project brain' and lists specific files written, distinguishing it from siblings like axint.agent.advice or axint.agent.claim.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives; no mention of prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.agent.release (Idempotent)
Release active local Axint file claims for this agent after finishing or abandoning a task. This keeps Codex, Claude, Cursor, and Xcode from blocking each other on stale claims. Use: use after finishing or abandoning claimed files so other agents are unblocked. Effects: updates local coordination claims under .axint/coordination; no network.
| Name | Required | Description | Default |
|---|---|---|---|
| all | No | Release all matching active claims. | |
| cwd | No | Project directory. Defaults to the MCP process cwd. | |
| agent | No | Agent lane releasing claims. | |
| files | No | Optional files to release. Omit to release this agent's claims. | |
| format | No | Output format. Defaults to markdown. |
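The release tool supports two modes implied by the table above: releasing named files, or releasing everything a lane holds via `all`. Both payloads below are illustrative assumptions:

```typescript
// Hypothetical payloads: release specific files, or everything this lane holds.
const releaseFiles = {
  method: "tools/call",
  params: {
    name: "axint.agent.release",
    arguments: { agent: "cursor", files: ["Sources/App/ContentView.swift"] },
  },
};

const releaseAll = {
  method: "tools/call",
  params: {
    name: "axint.agent.release",
    arguments: { agent: "cursor", all: true }, // drop every matching active claim
  },
};
```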
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate idempotentHint=true (safe to retry) and destructiveHint=false (non-destructive). The description adds the important detail that releases happen locally after task completion, which is behavioral context beyond annotations. No contradiction is present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—one short sentence that immediately conveys the tool's purpose. It is well front-loaded with the action and leaves no room for extraneous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has 5 parameters (none described), no output schema, and moderate complexity (releasing claims with multiple configurable options), the description is incomplete. It fails to cover parameter roles, return values, or side effects beyond the basic release action.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not explain any of the 5 parameters ('all', 'cwd', 'agent', 'files', 'format'). With 0% schema description coverage, the burden is on the description, but it provides no parameter-level guidance. Users must infer from names alone, which is insufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action: release active local Axint file claims. It uses a specific verb ('Release') and identifies the resource ('Axint file claims') and context ('after finishing or abandoning a task'), distinguishing it from sibling tools like 'axint.agent.claim' and 'axint.agent.install'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool: 'after finishing or abandoning a task'. This provides clear context. However, it does not explicitly mention when not to use it or suggest alternative tools, though the sibling list implies alternatives exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.cloud.check (Read-only, Idempotent)
Run an agent-callable Cloud Check against Swift or Axint TypeScript source. Accepts inline source or a sourcePath, then returns a Cloud-style verdict, Apple-specific findings, next... Use: use for Apple-aware source review and repair prompts; provide evidence for UI/runtime claims. Effects: read-only response from provided source/path; may use configured Cloud Check endpoint; no source is sent unless provided.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | Output format. markdown returns the report, json returns structured data, prompt returns only the repair... | |
| source | No | Inline Swift or Axint TypeScript source to check. Prefer sourcePath when possible; inline source should be... | |
| fileName | No | Optional display name for diagnostics when passing inline source. Defaults to sourcePath or <cloud-check>. | |
| language | No | Optional language override. Omit to infer from file extension and source contents. | |
| platform | No | Optional target platform hint. Use macOS to catch common iOS-only SwiftUI modifiers in Mac app work. | |
| sourcePath | No | Optional file path to read and check. Use this from Xcode agents after writing a generated Swift file. | |
| testFailure | No | Optional short failing unit/UI-test excerpt. Use this when static checks pass but Xcode tests still fail;... | |
| xcodeBuildLog | No | Optional short Xcode build excerpt. Pass only the failing lines or focused proof summary; full logs should... | |
| actualBehavior | No | Optional observed behavior for behavior-gap checks. Pair with expectedBehavior so Cloud Check can return a... | |
| runtimeFailure | No | Optional crash, freeze, hang, launch timeout, console, preview, or runtime failure text. Include the... | |
| expectedVersion | No | Optional expected Axint version for this project/session. Cloud Check also reads .axint/project.json when... | |
| expectedBehavior | No | Optional expected behavior for behavior-gap checks. Pair with actualBehavior when the bug is semantic rather... | |
| projectContextPath | No | Optional path to a local .axint/context/latest.json pack written by axint.project.index. Omit when... | |
| cloudRulesetVersion | No | Optional hosted/cloud ruleset version when different from the local compiler package. | |
| localPackageVersion | No | Optional local CLI/package version when the caller knows it. Used only for version-truth reporting. |
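A sketch of a Cloud Check call following the table's own guidance (prefer `sourcePath` over inline source, pass only the failing build-log lines); the envelope, path, and log excerpt are illustrative assumptions:

```typescript
// Hypothetical payload: check a generated Swift file for a macOS target,
// attaching only the failing line of an Xcode build log.
const checkRequest = {
  method: "tools/call",
  params: {
    name: "axint.cloud.check",
    arguments: {
      sourcePath: "Sources/App/GeneratedIntent.swift", // preferred over inline source
      platform: "macOS", // catches iOS-only SwiftUI modifiers in Mac app work
      xcodeBuildLog:
        "error: 'navigationBarTitleDisplayMode' is unavailable in macOS",
      format: "markdown",
    },
  },
};
```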
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds behavioral details beyond annotations: 'returns a Cloud-style verdict, Apple-specific findings, next steps, an AI repair prompt, and a redacted compiler feedback signal.' It also states 'No files are written,' consistent with read-only. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense paragraph (~100 words) with no fluff. It is front-loaded with purpose, then parameters, then output description, then usage context. Every sentence adds value; it is concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (11 optional parameters, no output schema), the description covers the high-level purpose, parameter intent, output components, and use case. The high schema coverage compensates for missing output schema details. Complete enough for an agent to select and invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). The description adds context for several parameters, e.g., 'Use macOS to catch common iOS-only SwiftUI modifiers in Mac app work' for platform, and explains format values. This goes beyond schema to guide usage, meriting a higher score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's verb ('Run an agent-callable Cloud Check'), resource ('Swift or Axint TypeScript source'), and scope ('Accepts inline source or a sourcePath'). It distinguishes from siblings like axint.compile and axint.swift.validate by specifying the cloud-check nature and the output components (verdict, findings, etc.).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context regarding when to use the tool: 'closes the browser-only gap for Xcode and MCP agents' and 'during the build loop.' It does not explicitly exclude alternatives, but the niche is clear. Sibling tools like axint.swift.validate exist, but the description does not compare them, missing some guidance for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.compile (Read-only, Idempotent)
Compile TypeScript source (defineIntent() call) into native Swift App Intent code. Returns { swift, infoPlist?, entitlements? } as a string — no files written, no network requests. On validation failure, returns diagnostics... Use: use when TypeScript DSL source should become Swift; use validate for cheaper preflight only. Effects: read-only generated Swift/diagnostics; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | When true (default), pipes generated Swift through swift-format with Axint's house style. Falls back to raw... | |
| source | Yes | Full TypeScript source code containing a defineIntent() call. Must be a complete file starting with an axint... | |
| fileName | No | Optional file name used in diagnostic messages, e.g., 'SendMessage.intent.ts'. Defaults to 'input.ts' if... | |
| emitInfoPlist | No | When true, returns an Info.plist XML fragment declaring the intent's infoPlistKeys. Only relevant for... | |
| emitEntitlements | No | When true, returns an .entitlements XML fragment for the intent's declared entitlements. Only relevant for... |
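A sketch of a compile call. The `defineIntent()` DSL embedded in `source` is illustrative only; the real Axint DSL surface (import path, field names) is an assumption, as is the request envelope:

```typescript
// Hypothetical axint.compile payload. The defineIntent() source shown
// is a guess at the DSL shape, not documented Axint syntax.
const intentSource = `
import { defineIntent } from "axint";

export default defineIntent({
  name: "SendMessage",
  title: "Send Message",
  parameters: { recipient: "string", body: "string" },
});
`.trim();

const compileRequest = {
  method: "tools/call",
  params: {
    name: "axint.compile",
    arguments: {
      source: intentSource,              // required: a complete file
      fileName: "SendMessage.intent.ts", // used in diagnostic messages
      emitInfoPlist: true,               // also return an Info.plist fragment
    },
  },
};
```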
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains the limitation of 'full TS compilation not available' on this endpoint and references CLI alternatives. Annotations already cover read-only, non-destructive, and idempotent traits, so the description appropriately supplements with operational constraints without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by critical usage notes in two concise sentences. Every sentence earns its place by providing essential context and alternatives, with no wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (compilation with multiple parameters) and rich annotations, the description is mostly complete. It covers purpose, limitations, and alternatives effectively. However, without an output schema, it could benefit from mentioning the expected return format (e.g., compiled code or errors), but the annotations and context signals provide sufficient safety and structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description does not add any parameter-specific details beyond what the schema provides, such as explaining the 'source' parameter's requirements or the optional flags' implications. Baseline 3 is appropriate as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('compile TypeScript source into native Swift App Intent code') and resource ('defineIntent() call'), and distinguishes it from siblings by explicitly mentioning axint.schema.compile as an alternative for better results on this endpoint. The purpose is precise and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs. alternatives: it notes that full TS compilation is not available on this remote endpoint and recommends axint.schema.compile for best results, while also mentioning that full TS compilation is available via the CLI. This clearly defines the tool's context and limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.context.docs (Read-only, Idempotent)
Return the project-local Axint docs context that agents should reload after new chats or context compaction. This is the durable docs memory that keeps the agent using Axint instead of forgetting the workflow. Use: use after compaction when the agent needs workflow docs without rereading the whole site. Effects: read-only generated docs context; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | No | Target Apple platform, such as macOS, iOS, visionOS, or all. | |
| projectName | No | Project name to include in the docs context. | |
| expectedVersion | No | Expected Axint version to compare against axint.status. |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. Description adds value by stating it's 'durable docs memory' but does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with key action, no unnecessary words. Each sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity, no output schema, and annotations covering safety, description is sufficient. Could mention return format but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with descriptions. Description adds no additional parameter details beyond schema, meeting the baseline for full coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns project-local Axint docs context, with a specific verb (Return) and resource. It distinguishes from siblings like axint.context.memory by focusing on docs context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Specifies when to use: after new chats or context compaction. Implies it should be used sparingly. Does not explicitly mention alternatives but context signals show sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.context.memory (Read-only, Idempotent)
Return the compact Axint operating memory that agents should reload at new chat start, after context compaction, or after long coding drift. Use this to keep Axint top-of-mind without rereading the full docs. Use: use after compaction or session restart when the agent needs compact operating rules. Effects: read-only generated context; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | No | Target Apple platform, such as macOS, iOS, visionOS, or all. | |
| projectName | No | Project name to include in the memory. | |
| expectedVersion | No | Expected Axint version to compare against axint.status. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
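The three optional parameters above slot into a standard MCP `tools/call` request. A minimal sketch, assuming the generic JSON-RPC 2.0 envelope MCP clients use (the project name and version values here are hypothetical):

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Construct a JSON-RPC 2.0 MCP tools/call request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# All three axint.context.memory parameters are optional.
req = build_tool_call("axint.context.memory", {
    "platform": "macOS",
    "projectName": "HealthTracker",  # hypothetical project name
    "expectedVersion": "1.2.3",      # hypothetical version string
})
print(json.dumps(req, indent=2))
```

The same envelope applies to every tool on this page; only `name` and `arguments` change per call.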
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds context about when to reload (e.g., after drift) and the purpose of the return value, which enriches understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, highly efficient, and front-loads the core purpose and key usage scenarios. No extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description clearly states the return is 'compact Axint operating memory' but does not detail its structure or contents. Since the output schema declares only a generic text field, a bit more on format could help, but the usage context is sufficiently complete for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All three parameters have 100% schema description coverage with clear explanations. The tool description adds no additional semantics beyond the schema, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'compact Axint operating memory' and specifies exact use cases (new chat start, after context compaction, long coding drift). It implicitly distinguishes from sibling 'axint.context.docs' by noting it avoids rereading full docs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage context: reload at new chat start, after context compaction, or after long coding drift. It also advises using this to keep Axint top-of-mind without full docs. However, it does not explicitly state when not to use or list alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.doctor · Grade A · Read-only · Idempotent
Audit the current Axint runtime and project wiring: running MCP version, expected version, Node/npm/npx paths, project .mcp.json, AGENTS.md, CLAUDE.md, .axint/project.json, and Xcode Claude Agent registration. Use this when an agent might be connected... Use: call when MCP wiring, package paths, Xcode setup, or project memory may be stale. Effects: read-only inspection; writes no files; no auth or network required.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Project directory to inspect. Defaults to the MCP process cwd. | |
| format | No | Output format. Defaults to markdown. | |
| expectedVersion | No | Expected Axint version. If provided and the running MCP version differs, doctor returns a blocker. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
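The `expectedVersion` blocker rule can be sketched as a plain comparison. This is an illustration of the documented behavior only, not Axint's actual implementation; the return shape is invented for the example:

```python
def doctor_version_check(running_version, expected_version=None):
    """Return a blocker finding when the running MCP version differs
    from the expected one, per the documented doctor rule."""
    if expected_version is not None and running_version != expected_version:
        return {
            "blocker": True,
            "detail": f"running {running_version}, expected {expected_version}",
        }
    return {"blocker": False}

print(doctor_version_check("1.4.0", "1.5.0"))  # mismatch -> blocker
print(doctor_version_check("1.5.0", "1.5.0"))  # match -> no blocker
```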
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds behavioral details: it checks specific files and returns a 'blocker' on version mismatch. This supplements the annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the action and list of checks, followed by usage context. Every sentence adds value; no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Although the output schema declares only a generic text field, the description explains the tool's checks and the 'blocker' behavior, covering both functionality and usage context. Output-format detail is slight, but sufficient for a diagnostic tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All three parameters have descriptions in the input schema (100% coverage). The description does not add new parameter semantics beyond mentioning 'expected version'. Baseline is 3 as schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Audit' and specifies the resource: 'current Axint runtime and project wiring'. It lists specific files and checks, distinguishing it from sibling tools like validate or check. The use case is explicitly given, making purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes explicit usage scenarios: 'when an agent might be connected to a stale Axint process' and 'when a new project needs first-try MCP setup proof'. It does not mention alternatives, but the context is clear and no exclusions are needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.feature · Grade A · Read-only · Idempotent
Generate a scaffolded Apple-native feature package from a description. Returns multiple files: compile-aware Swift source, companion widget/view, Info.plist fragments, entitlements, and XCTest scaffolds — all structured file-by-file so an Xcode agent can write each... Use: use for new Apple-native surfaces; not for repairing existing app bugs. Effects: read-only generated output; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | PascalCase feature name, e.g., 'LogWaterIntake'. If omitted, inferred from the description. Used as the base... | |
| domain | No | Apple App Intent domain. One of: messaging, productivity, health, social, community, collaboration,... | |
| format | No | When true (default), pipes every generated Swift file through swift-format with Axint's house style. Falls... | |
| params | No | Explicit parameter definitions as { fieldName: typeString }. E.g., { amount: 'double', unit: 'string' }. If... | |
| appName | No | The target app name, used in generated comments and test references. E.g., 'HealthTracker'. Optional. | |
| context | No | Optional nearby SwiftUI/design context. Axint uses this as a weak hint for layout primitives, platform... | |
| platform | No | Target Apple platform for generated starter UI. Use 'macOS' to avoid iOS-only SwiftUI affordances in... | |
| surfaces | No | Which Apple surfaces to generate. 'intent' produces an App Intent struct for Siri/Shortcuts/Spotlight.... | |
| description | Yes | What the feature does, in natural language. E.g., 'Let users log water intake via Siri' or 'Add a... | |
| componentKind | No | Optional component blueprint for the component surface, such as feedCard, mediaCard, utilityRow, avatar,... | |
| tokenNamespace | No | Optional Swift token enum generated by axint.tokens.ingest, e.g., 'SwarmTokens'. When provided, generated... | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
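Pulling the schema's own examples together, a call to this tool might carry arguments like the following sketch. Only `description` is required; every other value mirrors an example quoted in the parameter table, and the app name is hypothetical:

```python
# Arguments assembled from the schema's documented examples;
# only "description" is required, everything else is optional.
arguments = {
    "description": "Let users log water intake via Siri",
    "name": "LogWaterIntake",   # PascalCase feature name
    "domain": "health",         # Apple App Intent domain
    "platform": "macOS",        # avoid iOS-only SwiftUI affordances
    "params": {"amount": "double", "unit": "string"},
    "appName": "HealthTracker", # hypothetical app name
}
call = {"name": "axint.feature", "arguments": arguments}
```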
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies that the tool returns 'multiple files... structured file-by-file' and that it writes no files and uses no network. While annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, the description elaborates on the side-effect-free generation process and the output structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: first states the core purpose and outputs, second explains the composition workflow, third clarifies what the tool doesn't do. Every sentence adds essential information without redundancy, making it appropriately concise and front-loaded with the main functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (eleven parameters, nested objects, only a generic text output schema) and rich annotations, the description provides good contextual completeness. It explains the generation process, output format, usage workflow, and limitations. The main gap is the thin output schema, but the description partially compensates by detailing the return components.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all eleven parameters thoroughly. The description doesn't add significant parameter semantics beyond what's in the schema, though it implies the 'description' parameter is the primary input for generation. The baseline score of 3 reflects adequate parameter documentation through the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Generate a scaffolded Apple-native feature package from a description.' It specifies the output components (Swift source, widget/view, Info.plist fragments, entitlements, XCTest scaffolds) and distinguishes itself from siblings by emphasizing feature-package generation rather than compilation, scaffolding, validation, or template operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Designed for composition with Xcode MCP tools: call axint.feature to generate the package, then use XcodeWrite to place each file.' It also clarifies when NOT to use it: 'No files written, no network requests, no side effects,' indicating this is a generation-only tool that requires follow-up actions for file placement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.feedback.create · Grade C
Create or read a privacy-safe learning packet for Axint repair intelligence. Packets include project shape, diagnostic codes, issue class, redacted evidence, and likely product owner, but never include source code. Users... Use: use when Axint output was weak and you need a privacy-safe issue packet; not for sending source. Effects: writes or reads redacted .axint/feedback packets; never includes source by default.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Project directory. Defaults to the MCP process cwd. | |
| agent | No | Active host/tool lane. | |
| issue | No | Bug, weak Axint output, or failed repair behavior. | |
| format | No | Output format. Defaults to json. | |
| latest | No | When true, return the latest local feedback packet instead of creating a new one. | |
| source | No | Optional inline Swift source used locally only. | |
| fileName | No | Display file name when passing inline source. | |
| platform | No | Target Apple platform hint. | |
| sourcePath | No | Optional suspected Swift file path used locally only. | |
| testFailure | No | Optional focused unit/UI-test failure text. | |
| changedFiles | No | Changed files to pin into the context pack. | |
| xcodeBuildLog | No | Optional Xcode build/test log evidence. | |
| actualBehavior | No | Optional actual behavior. | |
| runtimeFailure | No | Optional crash, freeze, hang, or runtime failure text. | |
| expectedBehavior | No | Optional expected behavior. | |
| projectContextPath | No | Optional .axint/context/latest.json path. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds 'privacy-safe' and mentions learning packet structure, providing some behavioral context beyond annotations. However, it does not detail side effects, mutation behavior, or authorization needs, leaving gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is truncated mid-sentence, making it incomplete and requiring additional context. It is not efficiently compact and fails to convey complete information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 16 optional parameters, no output schema, and minimal schema coverage, the description is severely under-informative. It lacks essential details about usage, parameter relationships, and expected results, making it insufficient for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should compensate by explaining key parameters. It only vaguely mentions 'project shape, diagnostic codes, issue' without mapping to the 16 parameters, leaving almost all parameter semantics undefined.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool creates or reads a privacy-safe learning packet for Axint repair intelligence, with mention of included fields. However, it does not differentiate from sibling tools like axint.fix-packet, which may have overlapping purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description does not specify prerequisites, contexts, or exclude cases, leaving the agent without decision support.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.fix-packet · Grade A · Read-only · Idempotent
Read the latest Fix Packet that Axint emitted locally after a compile or watch run. Returns the exact repair artifact that AI tools or Xcode helpers should consume next: verdict, top findings, full diagnostics, next steps, and an AI-ready fix prompt.... Use: use after a local compile/watch/check emitted a packet; not a new analysis pass. Effects: read-only local artifact read; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Optional working directory to search from. Axint walks upward from this directory until it finds... | |
| format | No | Output format. json returns the full packet, markdown returns the human-readable report, and prompt returns... | |
| packetDir | No | Optional explicit packet directory override. Use this if the latest packet lives somewhere other than... | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
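Every tool on this page shares the same output envelope: a required `text` field plus an optional `isError` flag. A hedged sketch of uniform client-side handling, assuming the result arrives as a plain dict:

```python
def unwrap_axint_response(result):
    """Return the primary text block, raising if Axint
    marked the response as an error."""
    if result.get("isError"):
        raise RuntimeError(f"Axint tool error: {result.get('text', '')}")
    return result["text"]

# A successful fix-packet read yields the packet text directly.
packet_text = unwrap_axint_response({"text": "verdict: pass", "isError": False})
```

Because the envelope is identical across tools, one such helper can front every Axint call.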
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint and idempotentHint. Description adds value by detailing return components (verdict, findings, diagnostics, next steps, fix prompt). Does not mention error handling for missing packet, but annotations cover safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single focused paragraph with front-loaded purpose. Every sentence adds value, no wasted words. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The output schema declares only a generic text field, but the description details the return content. It covers parameters and usage context. Error cases go undescribed, but overall it is sufficient for a read-only, idempotent tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for all three parameters. The description does not add new semantics beyond the schema, so baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it reads the latest Fix Packet emitted by Axint after compile or watch. Verbs 'read' and 'returns' specify action and resource. Differentiates from siblings like axint.compile by emphasizing no recompilation needed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use after axint compile or watch to get the latest packet without another compile pass. Provides clear context for when to invoke, contrasting with compile tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.project.index · Grade C · Idempotent
Scan the local Apple project and write a compact .axint/context pack so Axint can reason over changed files, nearby SwiftUI surfaces, and interaction-risk files instead of only one source file at a time. Use: use before project-aware repair, multi-file SwiftUI work, or interaction-risk analysis. Effects: writes .axint/context unless dryRun=true; reads local project files only.
| Name | Required | Description | Default |
|---|---|---|---|
| dryRun | No | When true, returns the index without writing .axint/context files. | |
| format | No | Output format. Defaults to markdown. | |
| targetDir | No | Project directory to index. Defaults to the current working directory. | |
| includeGit | No | Whether to include git changed-file discovery. Defaults to true. | |
| projectName | No | Optional project name override for the context pack. | |
| changedFiles | No | Optional changed files to pin into the context pack. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
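The defaults in the parameter table above can be merged with caller overrides before issuing the call. A small sketch of that pattern, assuming the documented defaults; the target directory is hypothetical:

```python
# Documented defaults for axint.project.index arguments.
INDEX_DEFAULTS = {"format": "markdown", "includeGit": True, "dryRun": False}

def index_arguments(**overrides):
    """Merge caller overrides onto the documented defaults."""
    args = dict(INDEX_DEFAULTS)
    args.update(overrides)
    return args

# Preview the index without writing .axint/context files.
preview = index_arguments(dryRun=True, targetDir="/path/to/MyApp")
```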
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide idempotentHint=true and destructiveHint=false. The description states it writes a pack, implying a non-read-only operation, consistent with readOnlyHint=false. But it does not add details about side effects, overwriting, or permissions beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the main action and purpose in one compact opening sentence, followed by brief Use and Effects notes. It has no extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given six parameters and only a generic text output schema, the description is too brief to be complete. It does not explain the contents of the resulting context pack or how the tool fits into the broader workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description carries full burden for parameter semantics. Only 'changedFiles' is hinted by mentioning 'changed files' in the description; no other parameter (dryRun, format, targetDir, includeGit, projectName) is explained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Scan' and resource 'local Apple project', and explains the outcome (write compact context pack) and purpose (reasoning over changed files). However, it does not differentiate from sibling 'axint.project.pack', which likely has a similar purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives. No exclusions, prerequisites, or context about when it is appropriate to call this tool vs other sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.project.pack · Grade A · Read-only · Idempotent
Generate the Axint project-start pack for a new Apple app without writing files. Returns .mcp.json, AGENTS.md, CLAUDE.md, .axint/AXINT_MEMORY.md, .axint/project.json, and .axint/README.md so an Xcode/Codex/Claude agent can install the... Use: use to bootstrap an Apple project with Axint instructions; not to inspect an existing project. Effects: read-only generated file pack; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | MCP mode. local uses npx stdio; remote uses mcp.axint.ai. | |
| agent | No | Agent target. Defaults to all. | |
| format | No | Output format. Defaults to markdown. | |
| targetDir | No | Project directory label to embed in the report. | |
| projectName | No | Project name to embed in the generated instructions. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behavioral traits beyond annotations: 'without writing files' (non-destructive), returns multiple files (output details), and explains the enabled workflow. Annotations already provide readOnlyHint and idempotentHint, but the description adds context about the exact output and purpose, with no contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently worded and front-loaded with the main action. It packs significant detail in a single sentence, though some phrasing like 'so an Xcode/Codex/Claude agent can install the exact first-try workflow' could be streamlined without losing meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given only a generic text output schema, the description compensates by listing the returned files and the workflow purpose. However, it does not explain how the generated pack should be used or how it integrates with other tools, leaving some gaps for an agent needing full context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions in the schema. The description does not add additional semantics for parameters beyond mentioning targeted files. Baseline 3 is appropriate as the schema already documents parameters adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Generate') and resource ('Axint project-start pack') with clear scope ('for a new Apple app'). It lists the exact files returned and distinguishes the tool from siblings like axint.scaffold or axint.status, as it is the only one dedicated to generating the initial project pack.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for new app projects and for agents to install the first-try workflow, but it does not explicitly state when to use this tool versus alternatives like axint.scaffold or axint.session.start. There is no guidance on when not to use it or what prerequisites exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.project.syncVersion · Grade A · Idempotent
Update Axint-owned project-pack version hints after an upgrade. Use this after axint.upgrade or npm/pip upgrades so .axint/project.json, AGENTS.md, CLAUDE.md, and Axint rehydration docs stop pointing agents at an older package version. Use: use after package upgrades so local project-pack hints stop naming old Axint versions. Effects: updates Axint-owned project instruction files unless dryRun=true; no network.
| Name | Required | Description | Default |
|---|---|---|---|
| dryRun | No | When true, reports the files that would change without writing them. | |
| format | No | Output format. Defaults to markdown. | |
| version | No | Axint version to write. Defaults to the running MCP server version. | |
| targetDir | No | Project directory to update. Defaults to the current working directory. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (idempotentHint=true, destructiveHint=false), the description adds valuable context: updates specific files (.axint/project.json, AGENTS.md, etc.), dryRun behavior, and no network activity. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat redundant ('Use: use after...') but still efficient. Two sentences cover purpose, usage, and effects. Could be slightly tighter, but acceptable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description covers all necessary aspects: when to use, what it does, effects, and dryRun behavior. No missing key information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with each parameter documented. The description adds minimal value beyond schema (e.g., mentions dryRun effect and version default), so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it updates version hints after upgrades, specifying the verb 'Update' and the resource (project-pack version hints). It lists affected files and distinguishes itself from sibling tools like axint.upgrade.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use: 'after axint.upgrade or npm/pip upgrades'. It provides clear context and reinforces the guidance, but does not explicitly mention when not to use the tool or list alternatives. Still, it is clear enough for an agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.registry.search (B) · Read-only · Idempotent · Inspect
Search the Axint Registry for already-published packages that match a natural-language query. Use this BEFORE calling axint.feature or axint.compile so the agent can install an existing package instead of... Use: use before generating code to find reusable packages; not for validating local Swift. Effects: read-only local registry search using AXINT_REGISTRY_PATH or sibling checkout; no network by default.
| Name | Required | Description | Default |
|---|---|---|---|
| kind | No | Optional surface filter. One of: app-intent, view, widget, store, app, component. Loose match; 'intent'... | |
| limit | No | Hard cap on returned hits. Defaults to 10. | |
| query | Yes | Free-form description of what the agent is about to build. E.g., 'log a workout', 'capture a voice note',... | |
| minScore | No | Minimum normalized match score (0..1) below which results are dropped. Defaults to 0.1. | |
| platform | No | Optional platform filter. One of: iOS, macOS, watchOS, tvOS, visionOS. Filters by the manifest's... | |
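The parameter table above can be translated into a concrete call. Below is a hypothetical MCP `tools/call` payload for `axint.registry.search`; the JSON-RPC envelope follows the standard MCP wire format, and the argument values are invented examples, not a recorded transcript.

```python
import json

# Hypothetical tools/call payload; argument values are placeholders.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "axint.registry.search",
        "arguments": {
            "query": "log a workout",  # required free-form description
            "kind": "app-intent",      # optional surface filter
            "platform": "iOS",         # optional platform filter
            "limit": 5,                # cap results (default 10)
            "minScore": 0.2,           # drop weak matches (default 0.1)
        },
    },
}
print(json.dumps(payload, indent=2))
```

Only `query` is required; omitting the optional filters widens the search.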
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint true and idempotentHint true. The description adds the context of searching 'already-published packages' but does not expand on behavior like result pagination or error scenarios.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short and to the point, with no unnecessary words. However, the second sentence appears truncated, which slightly reduces clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and 5 parameters all undocumented, the description is insufficient for an agent to invoke the tool correctly without additional context. The purpose is clear but operational details are missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%. The description only mentions the 'query' parameter implicitly (natural-language query) but provides no explanation for 'kind', 'limit', 'minScore', or 'platform'. This fails to compensate for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches the Axint Registry for already-published packages using a natural-language query. It distinguishes from siblings by mentioning usage before calling axint.feature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to use this tool before calling axint.feature, providing clear context. However, it does not specify when not to use it or alternatives among the many sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.repair (C) · Inspect
Plan a project-aware Apple repair for existing apps. Indexes the local project, classifies build/UI/runtime evidence, runs Cloud Check when source is provided, ranks likely SwiftUI/App files, returns a... Use: use for existing app bugs with logs, UI symptoms, or runtime evidence; not for greenfield generation. Effects: writes .axint/repair and privacy-safe .axint/feedback artifacts; reads local project files.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Project directory. Defaults to the MCP process cwd. | |
| agent | No | Active host/tool lane. Axint adapts the repair plan so Codex/Claude/Cursor avoid Xcode-only write tools. | |
| issue | Yes | The broken behavior or repair goal, e.g. 'comment box is visible but cannot be tapped'. | |
| format | No | Output format. markdown returns the report, json returns structured data, and prompt returns the agent... | |
| source | No | Optional inline Swift source for the suspected file. Source is not included in the feedback packet. | |
| fileName | No | Display file name when passing inline source. | |
| platform | No | Target Apple platform hint. | |
| sourcePath | No | Optional suspected Swift file path. Axint reads it locally for Cloud Check and project anchoring. | |
| testFailure | No | Optional focused unit/UI-test failure text. | |
| writeReport | No | Whether to write .axint/repair/latest.json and latest.md. Defaults to true. | |
| changedFiles | No | Changed files to pin into the project context pack. | |
| writeFeedback | No | Whether to write a privacy-safe .axint/feedback packet. Defaults to true. | |
| xcodeBuildLog | No | Optional Xcode build/test log evidence. | |
| actualBehavior | No | Optional observed behavior from the failing run. | |
| runtimeFailure | No | Optional crash, freeze, hang, or runtime failure text. | |
| expectedBehavior | No | Optional expected behavior for the failing feature. | |
| projectContextPath | No | Optional .axint/context/latest.json path. |
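As a sketch of how the evidence parameters fit together, here is a hypothetical argument set for `axint.repair`; every string value (paths, behaviors) is a made-up placeholder, not from a real project.

```python
# Illustrative axint.repair arguments assembled from the table above.
repair_args = {
    "issue": "comment box is visible but cannot be tapped",  # required
    "sourcePath": "Sources/Views/CommentBox.swift",          # hypothetical path
    "platform": "iOS",
    "expectedBehavior": "tapping the box focuses the text field",
    "actualBehavior": "taps pass through to the list behind it",
    "format": "json",     # markdown | json | prompt
    "writeReport": True,  # defaults to true anyway
}

# Sanity-check required parameters before sending the call.
missing = [k for k in ("issue",) if k not in repair_args]
assert not missing, f"required parameters missing: {missing}"
```

Pairing `expectedBehavior` with `actualBehavior` gives the classifier both sides of the bug, per the table's semantic-evidence fields.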
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are all false (unknown side effects). The description lists actions (index, classify, run Cloud Check) but does not clarify if these are read-only or destructive, or what happens to the project. No behavioral traits beyond the listed steps are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single incomplete sentence (ending with '...'), which is concise but lacks structure. It could convey more information without becoming verbose by using bullet points or clearer phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (17 parameters, no output schema, many siblings), the description fails to explain return values, the Cloud Check process, or the overall output. Missing essential context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and no parameter explanations in the description, the agent has no guidance on the meanings of required parameter 'issue' or any other of the 17 parameters. This is a critical gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it plans a project-aware Apple repair and performs indexing, classification, and Cloud Check. This gives a clear verb and resource, but 'repair' is vague compared to sibling fix tools. It does distinguish itself by mentioning project-awareness and multiple steps.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus siblings like axint.swift.fix or axint.fix-packet. The description implies it's for Apple app repairs but offers no alternatives or exclusions, leaving the agent to guess.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.run (C) · Inspect
Run the enforced Axint Apple build loop outside the Xcode UI. Starts or refreshes the Axint session, validates Swift, runs Cloud Check, executes xcodebuild build/test when a project or workspace... Use: use when the agent must prove Swift validation, Cloud Check, Xcode build/test, and runtime evidence. Effects: starts child processes, writes .axint/run artifacts, may run xcodebuild/tests, and may call Cloud Check.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Project directory to run. Defaults to the MCP process cwd. | |
| agent | No | Current agent host lane. Axint uses this to start the right session profile and return host-safe repair... | |
| dryRun | No | Plan xcodebuild commands without executing them. | |
| format | No | Output format. markdown returns the run report, json returns structured data, prompt returns only the repair... | |
| scheme | No | Xcode scheme. If omitted, Axint tries to infer one. | |
| project | No | Path to .xcodeproj, relative to cwd or absolute. | |
| runtime | No | After build, launch the built macOS .app and capture runtime/timeout evidence. | |
| platform | No | Target Apple platform. Defaults to macOS unless inferred from destination. | |
| testPlan | No | Optional xcodebuild -testPlan for test runs. | |
| skipBuild | No | Skip xcodebuild build and only run Axint static gates. | |
| skipTests | No | Skip xcodebuild test. | |
| workspace | No | Path to .xcworkspace, relative to cwd or absolute. | |
| background | No | Start the run and immediately return a resumable job id instead of waiting for long Xcode build, test, or... | |
| destination | No | xcodebuild destination, e.g. platform=macOS or platform=iOS Simulator,name=iPhone 16. | |
| onlyTesting | No | Optional focused xcodebuild -only-testing selectors, e.g.... | |
| projectName | No | Project name for Axint session and report labels. | |
| writeReport | No | Whether to write .axint/run/latest.json and latest.md. Defaults to true. | |
| configuration | No | Xcode build configuration, e.g. Debug or Release. | |
| includeSource | No | Include full Swift source and full command output in json output. Defaults to false so long agent threads... | |
| modifiedFiles | No | Changed Swift files to validate and Cloud Check. Pass this whenever possible; if omitted, Axint validates... | |
| actualBehavior | No | Actual runtime behavior for semantic bug checks. | |
| runtimeFailure | No | Crash, freeze, hang, launch timeout, or UI failure evidence. | |
| timeoutSeconds | No | Build/test timeout in seconds. | |
| derivedDataPath | No | Optional xcodebuild -derivedDataPath. | |
| expectedVersion | No | Expected Axint package version for the run session. | |
| expectedBehavior | No | Expected runtime behavior for semantic bug checks. | |
| runtimeTimeoutSeconds | No | Runtime launch timeout in seconds. |
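A minimal sketch of an `axint.run` invocation, assuming a project named `MyApp`; the scheme, project path, and file list are illustrative, and `background` shows the resumable-job mode described in the table.

```python
# Hypothetical axint.run arguments; names and paths are invented.
run_args = {
    "scheme": "MyApp",                       # assumed scheme name
    "project": "MyApp.xcodeproj",            # relative to cwd
    "destination": "platform=iOS Simulator,name=iPhone 16",
    "modifiedFiles": [                       # pass changed files when possible
        "Sources/MyApp/ContentView.swift",
    ],
    "skipTests": True,                       # build only, no xcodebuild test
    "background": True,                      # return a resumable job id
    "timeoutSeconds": 600,
}
print(run_args["destination"])
```

With `background` set, the caller would later poll `axint.run.status` (or `axint.run.cancel`) using the returned job id.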
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate non-read-only (readOnlyHint=false) and open world (openWorldHint=true). The description adds that it 'starts or refreshes the Axint session', but does not detail side effects or what is modified. Additional behavioral context would be beneficial.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short (one incomplete sentence) and truncated. While concise, it lacks proper structure and is insufficient for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 27 parameters, no output schema, and no parameter descriptions, the description is severely incomplete. It fails to provide necessary context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain any of the 27 parameters. It only mentions high-level actions, leaving parameter meanings entirely unspecified.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the verb 'Run' and the resource 'Axint Apple build loop', and distinguishes from sibling tools like axint.compile, axint.run.cancel, etc. It clearly states the tool's primary action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for running the build loop outside Xcode UI, but provides no explicit when-to-use, when-not-to-use, or alternatives. Given many siblings, this is a gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.run.cancel (A) · Destructive · Inspect
Cancel the latest or selected Axint run by killing active child process groups. Use this when xcodebuild or a UI-test runner survived an MCP timeout or transport close. Use: use only to stop an active Axint run or stuck child process group. Effects: destructive: kills active Axint child process groups; no network.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Optional Axint run id. Defaults to latest active run. | |
| cwd | No | Project directory. Defaults to the MCP process cwd. | |
| format | No | Output format. Defaults to markdown. |
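Since `id` defaults to the latest active run, a cancel call can be as small as the sketch below; the directory path is a placeholder.

```python
# Hypothetical axint.run.cancel arguments. Omitting "id" targets the
# latest active run, per the parameter table above.
cancel_args = {
    "cwd": "/path/to/project",  # placeholder project directory
    "format": "markdown",       # default output format
}

# To cancel a specific run instead, add its id explicitly:
cancel_specific = dict(cancel_args, id="run-42")  # "run-42" is invented
```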
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint: true. The description adds that it 'kill[s] active child process groups,' giving a specific behavioral trait beyond the annotation. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise with the core purpose in the first sentence. However, it is truncated and lacks structured parameter information. It could be more organized but remains efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and three parameters with no descriptions, the description fails to provide a complete picture. The agent lacks information about return values, parameter behavior, and format options, making the tool risky to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only hints that 'selected' relates to the id parameter, but does not explain id, cwd, or format. The agent must infer parameter meanings, which is insufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the purpose: 'Cancel the latest or selected Axint run by killing active child process groups.' It uses a specific verb (cancel) and resource (Axint run), and distinguishes from siblings like axint.run (likely starts runs) and axint.run.status (check status).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description suggests when to use it: 'Use this when xcodebuild or a UI-test runner survived an MCP timeout or transport close,' providing a concrete scenario. However, it lacks explicit alternatives or when-not-to-use guidance. The context is useful but incomplete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.run.status (C) · Read-only · Idempotent · Inspect
Read the latest or selected Axint run job record, including active child process IDs. Use this when a long xcodebuild run may still be active after an MCP timeout or client disconnect. Use: use after MCP timeouts or long builds to resume without guessing whether xcodebuild is still active. Effects: read-only local run/job inspection; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Optional Axint run id. Defaults to latest active run. | |
| cwd | No | Project directory. Defaults to the MCP process cwd. | |
| format | No | Output format. Defaults to markdown. |
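All Axint tools share the generic `text`/`isError` output shape, so a client can handle `axint.run.status` responses uniformly. The response literal below is fabricated for illustration.

```python
# Fabricated axint.run.status response in the shared text/isError shape.
response = {
    "text": "run 42: xcodebuild still active (pid 12345)",  # invented
    "isError": False,
}

# Generic handling: raise on error, otherwise surface the report text.
if response.get("isError"):
    raise RuntimeError(response["text"])
print(response["text"])
```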
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark the tool as read-only, idempotent, and non-destructive. The description adds the detail about returning active child process IDs, which is useful context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single truncated sentence. While potentially concise, the truncation makes it incomplete and poorly structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and a truncated description, the tool lacks sufficient detail about return values and behavior. The description is incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%. The description does not explain the parameters (id, cwd, format). It implies id selects a specific run but gives no details on usage, default behavior, or format options.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool reads the latest or selected Axint run job record, including active child process IDs. It distinguishes from sibling tools like axint.run (starting runs) and axint.run.cancel (cancelling runs).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises use 'when a long xcodebuild run may still be active after an MCP timeout or client disconnect,' but it stops short of complete guidance on when alternatives would be better.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.scaffold (A) · Read-only · Idempotent · Inspect
Generate a starter TypeScript intent file from a name and description. Returns a complete defineIntent() source string ready to save as a .ts file — no files are written, no network requests made. On invalid domain values, returns an error string.... Use: use to create a small TypeScript intent starter; use templates for richer examples. Effects: read-only generated TypeScript; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | PascalCase intent name, e.g., 'SendMessage' or 'CreateEvent'. Must start with an uppercase letter and... | |
| domain | No | Apple App Intent domain. One of: messaging, productivity, health, social, finance, commerce, media,... | |
| params | No | Initial parameters for the intent. Each item needs name (camelCase), type (string | int | double | float |... | |
| description | Yes | Human-readable description of what the intent does, shown to users in Shortcuts and Spotlight, e.g., 'Send a... |
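A hypothetical `axint.scaffold` argument set; the `params` item shape (`name`/`type` keys) is inferred from the truncated table entry above, so treat it as an assumption.

```python
# Hypothetical axint.scaffold arguments; the params item shape is assumed.
scaffold_args = {
    "name": "SendMessage",                          # PascalCase, required
    "description": "Send a message to a contact",   # required, user-facing
    "domain": "messaging",                          # one of the listed domains
    "params": [                                     # assumed item shape
        {"name": "recipient", "type": "string"},
        {"name": "body", "type": "string"},
    ],
}

# The table says the name must start with an uppercase letter.
assert scaffold_args["name"][0].isupper()
```

The tool returns a `defineIntent()` source string; nothing is written to disk, so the caller saves the result as a `.ts` file itself.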
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide. Annotations indicate read-only, non-destructive, and idempotent operations, but the description clarifies that 'no files are written, no network requests made' and specifies error behavior: 'On invalid domain values, returns an error string.' It also explains the output's compatibility: 'The output compiles directly with axint.compile.' This provides practical implementation details that annotations don't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three sentences that each serve a distinct purpose: stating the core functionality, explaining behavioral details, and providing usage guidelines. There's no redundant information, and key points are front-loaded. Every sentence earns its place by adding unique value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (generating TypeScript code with multiple parameters), the description is complete enough. It covers purpose, behavioral constraints, error handling, output format, and usage alternatives. While there's no output schema, the description specifies the return type ('complete defineIntent() source string' or 'error string') and compatibility with axint.compile. The annotations provide safety context, and the description fills in practical implementation details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description doesn't add significant parameter-specific information beyond what's in the schema. It mentions 'name and description' generically but doesn't provide additional syntax, format, or usage details for parameters. The baseline score of 3 reflects adequate but not enhanced parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Generate a starter TypeScript intent file from a name and description.' It specifies the verb ('Generate'), resource ('starter TypeScript intent file'), and output format ('complete defineIntent() source string ready to save as a .ts file'). It distinguishes from siblings by mentioning alternatives like axint.templates.get and axint.schema.compile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs. alternatives: 'Use this when creating a new intent from scratch; use axint.templates.get for a working reference example, or axint.schema.compile to generate Swift without writing TypeScript.' It clearly defines the primary use case and names specific sibling tools for different scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.schema.compile (A) · Read-only · Idempotent · Inspect
Compile a minimal JSON schema directly to Swift, bypassing the TypeScript DSL entirely. Supports intents, views, components, widgets, and full apps via the 'type' parameter. Uses ~20 input tokens vs hundreds for TypeScript — ideal for LLM agents... Use: use for token-light JSON-to-Swift generation; use compile for full TypeScript DSL control. Effects: read-only Swift generation; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| body | No | View/widget only. Raw SwiftUI code for the body, e.g., 'VStack { Text("Hello") }'. Wrapped in the struct... | |
| name | Yes | PascalCase name, e.g., 'CreateEvent' for intents, 'EventListView' for views, 'StepsWidget' for widgets. Used... | |
| type | Yes | What to compile. Determines which other parameters are relevant: intent uses params/domain/title; view uses... | |
| entry | No | Widget only. Timeline entry fields as { fieldName: typeString }. E.g., { steps: 'int' }. Do not include... | |
| props | No | View only. Prop definitions as { fieldName: typeString }. E.g., { title: 'string', count: 'int' }. Same type... | |
| state | No | View only. State variable definitions as { fieldName: { type: 'string', default?: value } }. Generates... | |
| title | No | Human-readable title shown in Shortcuts/Spotlight. Intent only. E.g., 'Create Event'. Defaults to a... | |
| domain | No | Apple App Intent domain. Intent only. One of: messaging, productivity, health, social, finance, commerce,... | |
| format | No | When true (default), pipes generated Swift through swift-format with Axint's house style. Falls back to raw... | |
| params | No | Intent only. Parameter definitions as { fieldName: typeString }. E.g., { recipient: 'string', amount:... | |
| scenes | No | App only. Scene definitions for the @main App struct. At least one scene with kind 'windowGroup' is... | |
| families | No | Widget only. Supported widget sizes: systemSmall, systemMedium, systemLarge, systemExtraLarge,... | |
| platform | No | Optional target Apple platform hint for view/widget generation. Use macOS when the host project is a Mac... | |
| description | No | Description of what this intent/view/widget does. Shown to users in system UI for intents. Optional but... | |
| displayName | No | Widget only. Human-readable name shown in the widget gallery. E.g., 'Daily Steps'. Defaults to a spaced... | |
| componentKind | No | Component only. Optional known component shape. Use cardArchetypes for a multi-component card kit, or omit... | |
| tokenNamespace | No | Optional Swift token enum generated by axint.tokens.ingest, e.g., 'SwarmTokens'. Generated views/components... | |
| refreshInterval | No | Widget only. Timeline refresh interval in minutes. E.g., 30 for half-hourly updates. Defaults to 60. |
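To illustrate how `type` selects which parameters apply, here is a hypothetical `axint.schema.compile` argument set for a small view; the names and values are invented, with nesting taken from the parameter table above.

```python
# Hypothetical view schema for axint.schema.compile; only view-relevant
# parameters (props, state, body) are set, as selected by type="view".
view_schema = {
    "type": "view",
    "name": "EventListView",                 # PascalCase struct name
    "platform": "iOS",
    "props": {"title": "string", "count": "int"},     # { field: typeString }
    "state": {"query": {"type": "string", "default": ""}},
    "body": 'VStack { Text("Hello") }',      # raw SwiftUI, wrapped for you
}
print(view_schema["name"])
```

Switching `type` to `widget` would make `entry`, `families`, and `refreshInterval` the relevant keys instead, per the table.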
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the annotations. Annotations indicate it's read-only, non-destructive, and idempotent, but the description specifies that it 'Returns Swift source with token usage stats; no files written, no network requests' and 'On invalid input, returns an error message describing the issue.' This clarifies output format and error handling, which are not covered by annotations. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with key information in the first sentence and efficiently structured into two sentences. Each sentence earns its place by covering purpose, token optimization, output behavior, error handling, and sibling tool differentiation, with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (14 parameters, nested objects) and lack of output schema, the description is mostly complete. It explains what the tool returns (Swift source with token stats) and error behavior, but could benefit from more detail on output structure or examples. However, it effectively covers usage context and behavioral traits, making it adequate for an agent to understand the tool's role.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions the 'type' parameter and its relevance to other parameters, but with 100% schema description coverage, the input schema already provides comprehensive details for all 14 parameters. The description adds minimal semantic value beyond the schema, such as noting the tool 'Supports intents, views, widgets, and full apps via the 'type' parameter,' which is already clear from the schema's enum. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Compile a minimal JSON schema directly to Swift, bypassing the TypeScript DSL entirely.' It specifies the verb ('compile'), resource ('JSON schema'), and target language ('Swift'), and distinguishes it from its sibling 'axint.compile' by noting this tool is for 'quick Swift generation without writing TypeScript' while the sibling is for 'complex intents with custom perform() logic.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives. It states: 'Use this for quick Swift generation without writing TypeScript; use axint.compile when you need the full DSL for complex intents with custom perform() logic.' It also mentions the tool is 'ideal for LLM agents optimizing token budgets,' indicating a specific use case context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.session.start (A)

Start an enforced Axint agent session. Writes .axint/session/current.json plus token-scoped session history, refreshes .axint/AXINT_REHYDRATE.md, returns compact operating memory, docs context, a session token, and the exact axint.workflow.check... Use: call at the start of a tool-enabled agent session or after context compaction. Effects: writes .axint/session and rehydration artifacts; no auth or network required.
| Name | Required | Description | Default |
|---|---|---|---|
| agent | No | Agent target for the session. Defaults to all. | |
| format | No | Output format. Defaults to markdown. | |
| platform | No | Target Apple platform, such as macOS, iOS, visionOS, or all. | |
| targetDir | No | Project directory where .axint/session/current.json and token-scoped session history should be written.... | |
| ttlMinutes | No | How long the session token remains valid. Defaults to 720 minutes. | |
| projectName | No | Project name to embed in the session and returned context. | |
| expectedVersion | No | Expected Axint package version. Defaults to the running MCP version. |
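As a sketch, a session could be started with arguments like the following; the directory path and project name are placeholders, and the other values repeat the defaults documented in the parameter table above.

```typescript
// Hypothetical axint.session.start arguments. Path and name are
// placeholders; agent, format, and ttlMinutes show the documented defaults.
const sessionArgs = {
  targetDir: "/path/to/MyApp", // where .axint/session/current.json is written
  projectName: "MyApp",
  platform: "macOS",
  agent: "all",                // documented default
  format: "markdown",          // documented default
  ttlMinutes: 720,             // default session-token lifetime in minutes
};
```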
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions writing a file and returning session data, which aligns with the non-read-only annotation. It adds 'enforced' and 'prevents drift' but does not fully detail side effects (e.g., overwrite behavior). The annotations are minimal, so the description carries the burden; it is adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each essential. Purpose and usage front-loaded. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, outputs, and usage timing. With 7 optional parameters and no output schema, description is reasonably complete. Term 'compact operating memory' could be clearer but acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for all 7 parameters. Description adds no new parameter-level meaning beyond listing return types. Baseline 3 is appropriate as schema handles semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Start an enforced Axint agent session' and lists outputs. Distinguishes from siblings by referencing axint.workflow.check args and positioning as first tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use this as the first Axint tool in Xcode after a new chat, MCP restart, or context compaction', providing clear usage context. Lacks explicit when-not-to-use or alternatives, but strong enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.status (A, Read-only, Idempotent)
Report the exact running Axint MCP server version, package path, uptime, registered tool count, and same-thread MCP reload/update instructions. Use this as the first tool in a new Codex, Claude, or Xcode agent chat to prove which Axint... Use: call first or after an MCP reload to prove the connected server version; do not use as an npm/PyPI lookup. Effects: read-only; writes no files; no auth or network required.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | Output format. markdown is human-readable, json is structured, and prompt is a short instruction an agent... |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds specific behavioral context (what information is reported, that it's about the running server, not a guessed version) beyond what annotations provide. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states what it reports, second provides usage guidance. No superfluous words. Information is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the optional parameter and no output schema, the description covers the key purpose and usage. It lacks detail on return format but the format parameter hints at it. Overall sufficient for a status tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single optional parameter 'format' fully described in the schema. The description does not add further parameter semantics beyond listing the output content. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool reports specific server details (version, path, uptime, tool count, instructions) and distinguishes it from guessing versions. The verb 'report' plus explicit resource definition makes purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly recommends using it as the first tool in a new Xcode agent chat to verify the connected Axint process. While alternatives aren't named, the context of sibling tools implies differentiation. Could be improved by explicitly stating when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.suggest (A, Read-only, Idempotent)
Suggest Apple-native features for an app based on its description. The domain is only a weak hint; the app description wins. Returns a ranked list of features with recommended surfaces (intent, widget, view,... Use: use before generation to choose Apple surfaces; not a substitute for registry search or validation. Effects: local mode is read-only; Pro mode may call Axint endpoint when credentials are configured.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Suggestion strategy. local is deterministic and offline. pro/ai uses the authenticated Axint Pro... | |
| goals | No | Optional product goals for Pro mode, such as activation, retention, conversion, speed, accessibility, or... | |
| limit | No | Maximum number of suggestions to return. Defaults to 5. Suggestions are ordered by estimated user impact. | |
| stage | No | Optional product stage used by Pro mode to tune suggestions without embedding private strategy logic in the... | |
| domain | No | Primary app domain. One of: messaging, productivity, health, social, community, collaboration,... | |
| exclude | No | Optional concepts to avoid, for example ['dating', 'fitness']. | |
| audience | No | Optional audience context, such as consumers, teams, operators, developers, clinicians, creators, or... | |
| platform | No | Optional Apple platform target used by AI mode to tailor suggestions. | |
| constraints | No | Optional constraints for Pro mode, such as must be macOS-native, no server, no payments, or build in one... | |
| appDescription | Yes | What the app does, in natural language. E.g., 'A fitness tracking app that logs workouts and counts steps'... |
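A minimal call needs only `appDescription`; the sketch below adds a few of the optional hints. The example text comes from the schema itself, and the remaining values are illustrative.

```typescript
// Hypothetical axint.suggest arguments. appDescription is the only
// required field; the rest are optional hints from the table above.
const suggestArgs = {
  appDescription:
    "A fitness tracking app that logs workouts and counts steps", // example from the schema
  domain: "health", // only a weak hint; the app description wins
  mode: "local",    // deterministic and offline
  limit: 5,         // default; results ordered by estimated user impact
};
```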
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already cover read-only, non-destructive, and idempotent behavior, but the description adds valuable context about what the tool returns ('ranked list of features with recommended surfaces, estimated complexity, and one-line description') and explicitly states it has no side effects ('No files written, no network requests, no side effects'), which enhances transparency beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with two sentences: the first explains the tool's purpose and output, and the second provides usage guidelines and behavioral constraints. Every sentence adds essential information with zero wasted words, making it highly concise and well-organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and lack of output schema, the description does a good job of explaining the return format and behavioral constraints. However, it could be slightly more complete by explicitly mentioning the tool's limitations or error conditions, though it's largely sufficient for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents the parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation without providing extra semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('suggest Apple-native features for an app'), identifies the resource ('app based on its domain or description'), and distinguishes it from sibling tools by explicitly mentioning when to use it versus 'axint.feature'. It provides a complete picture of what the tool does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('to discover what Axint can generate for an app before calling axint.feature') and when not to use it ('No files written, no network requests, no side effects'), providing clear alternatives and exclusions. This gives the agent precise guidance on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.swift.fix (A, Read-only, Idempotent)
Auto-fix mechanical Swift errors detected by axint.swift.validate. Handles 20+ fix rules: rewrites @State let → @State var, injects perform() into AppIntents, drops var body stubs into Widgets and Apps, adds let date: Date to TimelineEntry,... Use: use after swift.validate when errors are mechanical; inspect remaining diagnostics manually. Effects: read-only fixed-source output; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| file | No | Optional file name to attach to diagnostics. | |
| format | No | When true (default), pipes the repaired Swift through swift-format with Axint's house style. Falls back to... | |
| source | Yes | Full Swift source code to fix. |
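The '@State let → @State var' rule named in the description can be sketched as a single textual rewrite. This illustrates the rule's effect only; it is not Axint's implementation and ignores edge cases the real fixer presumably handles.

```typescript
// Minimal sketch of ONE mechanical rule from the description:
// rewrite `@State let` to `@State var`. Illustrative only.
function fixStateLet(source: string): string {
  return source.replace(/@State\s+let\b/g, "@State var");
}

const broken = "struct CounterView { @State let count = 0 }";
const fixed = fixStateLet(broken);
// fixed === "struct CounterView { @State var count = 0 }"
```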
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies the tool handles '20+ fix rules' with examples, mentions it 'Returns the fixed source plus the list of fixes applied', and explicitly states 'Read-only output, no side effects'. While annotations cover readOnlyHint=true and destructiveHint=false, the description elaborates on the output format and scope of fixes, enhancing transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by specific examples of fix rules, output details, and behavioral notes. Every sentence adds value without redundancy, making it efficient and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (handling 20+ fix rules) and lack of output schema, the description does well by explaining the return values ('fixed source plus the list of fixes applied') and behavioral traits. However, it could benefit from more details on error handling or limitations, such as what happens with non-mechanical errors, to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents the parameters ('file', 'format', and 'source') clearly. The description doesn't add any parameter-specific details beyond what's in the schema, such as explaining how 'source' should be formatted or when 'file' is useful. Baseline 3 is appropriate as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Auto-fix mechanical Swift errors detected by axint.swift.validate' with specific examples of fix rules like '@State let → var' and 'converts nonisolated var → let'. It distinguishes itself from the sibling tool 'axint.swift.validate' by handling fixes rather than just validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by specifying it's for 'mechanical Swift errors detected by axint.swift.validate', indicating when to use it. However, it doesn't explicitly state when not to use it or mention alternatives among siblings like 'axint.suggest' or 'axint.compile', which could be relevant for other types of fixes or compilation issues.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.swift.validate (A, Read-only, Idempotent)
Validate existing Swift source against 150 build-time rules (AX700–AX749) including Swift 6 concurrency and Live Activities. Catches bugs Xcode buries behind generic 'type does not conform' errors: missing perform() on AppIntent, missing var... Use: use on generated or edited Swift before build; pair with swift.fix for mechanical repairs. Effects: read-only Swift diagnostics; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| file | No | Optional file name to attach to diagnostics for editor integration. | |
| source | Yes | Full Swift source code to validate. |
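One of the checks the description names, a missing perform() on an AppIntent, can be sketched as a diagnostic rule. The rule id and message text below are placeholders, not Axint's actual diagnostics.

```typescript
// Sketch of one diagnostic in the spirit of the rules above: flag an
// AppIntent conformance with no perform() method. Illustrative only.
interface Diagnostic {
  rule: string;
  message: string;
}

function checkMissingPerform(source: string): Diagnostic[] {
  const diags: Diagnostic[] = [];
  if (/:\s*AppIntent\b/.test(source) && !/\bfunc\s+perform\s*\(/.test(source)) {
    diags.push({
      rule: "AX7xx", // placeholder id within the documented AX700–AX749 range
      message: "AppIntent conformance without a perform() method",
    });
  }
  return diags; // empty array means clean, matching the tool's contract
}
```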
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide key behavioral hints (readOnlyHint: true, destructiveHint: false, idempotentHint: true), but the description adds valuable context beyond this: it specifies that the tool 'catches bugs Xcode buries' and 'returns JSON array of diagnostics,' with 'empty array means clean.' This clarifies the output format and purpose, though it doesn't detail rate limits or auth needs. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and efficiently structured in three sentences: first states what it does, second gives examples of bugs caught, third explains output and behavioral traits. Every sentence adds value without redundancy, making it appropriately sized and zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (validating against 150 rules), rich annotations (covering safety and idempotency), and no output schema, the description is mostly complete: it explains the purpose, usage, output format, and behavioral traits. However, it lacks details on error handling or specific rule categories beyond examples, leaving minor gaps for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('source' as required full Swift code, 'file' as optional file name). The description does not add meaning beyond the schema, as it doesn't explain parameter usage or constraints. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but doesn't need to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('validate existing Swift source') and resources ('against 150 build-time rules'), distinguishing it from siblings like 'axint.swift.fix' (which likely fixes issues) and 'axint.compile' (which compiles code). It explicitly mentions what rules it validates (Swift 6 concurrency, Live Activities) and what bugs it catches, making it highly specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: for validating Swift source code against specific rules, catching bugs that Xcode misses. It distinguishes from alternatives by noting it's 'read-only, no files written, no side effects,' implying it's for diagnostics only, not for fixing issues (which might be handled by 'axint.swift.fix'). This gives clear context for usage versus siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.templates.get (A, Read-only, Idempotent)
Retrieve the full TypeScript source code of a specific bundled template by id. Returns a complete, compilable defineIntent() file as a string — ready to save as .ts and compile with axint.compile. Includes perform() logic, parameter definitions, and... Use: use to fetch a complete reference template; edit before compiling into an app. Effects: read-only template source; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Template id from axint.templates.list, e.g., 'send-message' or 'create-event'. Case-sensitive, kebab-case... |
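A call carries just the single id, as in this sketch; the name/arguments envelope is the generic MCP tools/call shape, and 'send-message' is the example id given in the schema.

```typescript
// Hypothetical MCP tool-call payload for axint.templates.get.
const call = {
  name: "axint.templates.get",
  arguments: {
    id: "send-message", // case-sensitive kebab-case id from axint.templates.list
  },
};
```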
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover read-only, non-destructive, idempotent, and closed-world behavior, but the description adds valuable context: it specifies that the output is a complete TypeScript file ready for compilation, returns an error message for invalid IDs, and involves no file writes or network requests. This enhances understanding beyond the annotations without contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by clarifying details. Every sentence adds value: the second explains output format and constraints, the third covers error handling, the fourth provides usage guidance, and the fifth distinguishes from siblings. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter, no output schema), the description is complete: it covers purpose, usage, behavioral traits, and sibling differentiation. With annotations handling safety and idempotency, and the schema fully documenting the parameter, no significant gaps remain for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the parameter 'id' is fully documented in the schema. The description adds minimal extra meaning by referencing axint.templates.list for valid IDs and noting case-sensitive kebab-case format, but this is largely redundant with the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Return the full TypeScript source code') and resource ('bundled reference template by id'), distinguishing it from siblings like axint.scaffold (which generates skeletons) and axint.templates.list (which discovers IDs). It specifies the output format ('complete defineIntent() file') and compilation compatibility.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Call axint.templates.list first to discover valid ids') and when not to use it ('Unlike axint.scaffold which generates a skeleton'), providing clear alternatives and prerequisites. It also clarifies that no files are written or network requests made, setting expectations appropriately.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.templates.list (A, Read-only, Idempotent)
List all 26 bundled reference templates in the Axint SDK. Returns a JSON array of { id, name, description } objects — one per template. Templates cover messaging, productivity, health, finance, commerce, media, navigation, smart-home, and entity/query patterns. No input... Use: use to discover valid template ids before templates.get. Effects: read-only template metadata; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
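The response text is documented as a JSON array of { id, name, description } objects, one per template. A minimal consumer sketch follows; the sample entry is invented for illustration, except for the 'send-message' id, which appears elsewhere on this page.

```typescript
// Parse the documented { id, name, description } array out of the
// response text. The sample entry is invented for illustration.
interface TemplateMeta {
  id: string;
  name: string;
  description: string;
}

const responseText =
  '[{"id":"send-message","name":"Send Message","description":"Messaging intent"}]';
const templates: TemplateMeta[] = JSON.parse(responseText);
const ids = templates.map((t) => t.id); // valid inputs for axint.templates.get
```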
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations by specifying 'No parameters, no files written, no network requests, no side effects' and detailing the return format ('array of { id, name, description } objects'). While annotations cover safety (readOnlyHint, destructiveHint), the description enhances understanding of operational constraints and output structure, though it doesn't mention rate limits or auth needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with core functionality, uses concise sentences, and every part adds value—from listing behavior to sibling comparisons. There is no wasted text, and the structure efficiently conveys purpose, usage, and distinctions in a compact form.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema), the description is complete. It covers purpose, usage guidelines, behavioral traits, and sibling context adequately. The lack of an output schema is compensated by describing the return format, making it sufficient for an agent to understand and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is high. The description explicitly states 'No parameters', which reinforces the schema's indication of no inputs. It adds context about the tool's parameterless nature, compensating for the lack of parameters by clarifying usage intent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all bundled reference templates'), resource ('available in the axint SDK'), and distinguishes it from sibling tools by contrasting with 'axint.scaffold' and linking to 'axint.templates.get'. It provides a precise verb+resource combination with explicit sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Use this to discover template ids') and when to use an alternative ('then call axint.templates.get with a specific id'). It also clarifies differences from 'axint.scaffold' by noting that templates are 'complete working examples with perform() logic included', providing clear context and exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
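The review above cites the tool's documented return format, 'array of { id, name, description } objects'. A minimal sketch of how an agent might consume that shape before calling axint.templates.get — the interface and helper below are hypothetical, inferred only from the quoted format:

```typescript
// Hypothetical shape of an axint.templates.list entry, inferred from the
// documented "{ id, name, description }" return format.
interface TemplateSummary {
  id: string;
  name: string;
  description: string;
}

// An agent would typically list templates first, then fetch one by id via
// axint.templates.get. Here we just select an id by keyword match.
function findTemplateId(
  templates: TemplateSummary[],
  keyword: string
): string | undefined {
  const needle = keyword.toLowerCase();
  return templates.find(
    (t) =>
      t.name.toLowerCase().includes(needle) ||
      t.description.toLowerCase().includes(needle)
  )?.id;
}
```

The returned id would then be passed as the `id` argument to axint.templates.get, per the description's own guidance.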
axint.tokens.ingest (A) · Read-only · Idempotent
Ingest design tokens from JSON, JS/TS object exports, or CSS variables and return a SwiftUI token enum. Use this before generating Swarm-style views/components so agents can preserve exact brand colors, dimensions, radii, spacing, and typography. No files... Use: use before view/component generation when a design system should be preserved. Effects: read-only Swift token output; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | Output format. swift returns the SwiftUI token enum, json returns normalized tokens, markdown returns an... | |
| source | No | Inline token source. Supports JSON objects, JS/TS object exports, and CSS custom properties. | |
| namespace | No | Swift enum namespace to generate. Example: SwarmTokens. Defaults to AxintDesignTokens. | |
| sourcePath | No | Path to a token file such as swarm-tokens.js, tokens.json, or tokens.css. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
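To make the CSS-variables input concrete, here is an illustrative sketch of the kind of normalization axint.tokens.ingest performs on a CSS token source. This is not the tool's implementation — just the CSS-custom-property case, reduced to a regex pass:

```typescript
// Illustrative only: extract CSS custom properties from a token source,
// one of the input formats axint.tokens.ingest accepts. The real tool
// also handles JSON and JS/TS exports and emits a Swift enum.
function parseCssTokens(css: string): Record<string, string> {
  const tokens: Record<string, string> = {};
  // Match declarations like `--brand-primary: #FF5A00;`
  for (const m of css.matchAll(/--([\w-]+)\s*:\s*([^;]+);/g)) {
    tokens[m[1]] = m[2].trim();
  }
  return tokens;
}
```

A map like this would then be rendered under the Swift enum namespace given by the `namespace` parameter (default AxintDesignTokens).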
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds 'No files are written,' confirming the read-only nature. It does not elaborate on error handling or limits, but with annotations, the behavior is sufficiently transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. The first sentence states the core action, the second provides usage context. Highly concise and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose and usage but does not explain the return value format or structure in detail. Since there is no output schema, the description could be more helpful, but for a simple ingest tool it is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and each parameter has a good description. The tool description does not add significant new meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it ingests design tokens from specified formats (JSON, JS/TS, CSS) and returns a SwiftUI token enum. It also explains the usage context (before generating views/components) and distinguishes itself from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description tells when to use it ('before generating Swarm-style views/components') but does not explicitly mention when not to use or compare with alternatives. However, the context is clear enough for an agent to decide.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.upgrade (B) · Destructive
Check the latest Axint package and optionally apply the upgrade while preserving the current agent thread. Returns exact install commands, optional Xcode MCP wiring refresh, .axint/upgrade/latest.*... Use: call when axint.status shows a stale server; not for app dependency upgrades. Effects: destructive when apply=true: can run package installs, refresh Xcode wiring, and write .axint/upgrade; may use npm network.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Project directory where .axint/upgrade/latest.* should be written. Defaults to the MCP process cwd. | |
| apply | No | Whether to install the target package. Defaults to false, which only returns the plan. | |
| format | No | Output format. markdown is human-readable, json is structured, and prompt is the continuation block. | |
| writeReport | No | Whether to write .axint/upgrade/latest.json and latest.md. Defaults to true when apply is true. | |
| latestVersion | No | Known latest version to compare against. Useful for deterministic agent tests or offline planning. | |
| targetVersion | No | Specific Axint version to install. Defaults to the latest published npm version. | |
| reinstallXcode | No | Whether apply mode should also refresh optional Xcode MCP wiring. Defaults to false. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
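The `latestVersion` and `targetVersion` parameters imply a version comparison before anything is applied. A sketch of that decision, under the assumption of plain numeric dotted versions (pre-release tags are ignored here; the real tool would have to handle them):

```typescript
// Sketch of the comparison an upgrade planner needs: is targetVersion
// (or the latest npm version) newer than the installed one?
// Numeric-only simplification of semver.
function isUpgrade(installed: string, target: string): boolean {
  const a = installed.split(".").map(Number);
  const b = target.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] ?? 0;
    const y = b[i] ?? 0;
    if (x !== y) return y > x;
  }
  return false; // equal versions: nothing to apply
}
```

Since `apply` defaults to false, an agent can safely request the plan first and only set apply=true once a check along these lines confirms a newer version exists.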
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true, so the agent knows it can cause changes. The description adds that it preserves the current agent thread, which is valuable behavioral context beyond what annotations provide. However, it does not detail what exactly gets changed or what permissions are required.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise but appears truncated ('Returns exact install...'), suggesting incompleteness. It front-loads the key purpose, but the truncation leaves the remainder unfinished.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has 7 parameters, destructive behavior, and no output schema, the description is severely incomplete. It explains neither the parameters nor the return values, leaving the agent to work from a partial summary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must explain parameters. It does not mention any of the 7 parameters (cwd, apply, format, writeReport, latestVersion, targetVersion, reinstallXcode), leaving their semantics completely unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks the latest Axint package and optionally applies the upgrade while preserving the agent thread. It uses specific verbs ('check', 'apply', 'preserving') and a clear resource ('Axint package'), distinguishing it from siblings like 'install' or 'release'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for checking and upgrading but does not explicitly state when to use this tool versus alternatives like 'axint.agent.install' or 'axint.agent.release'. No exclusions or when-not-to-use guidance are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.validate (A) · Read-only · Idempotent
Validate a TypeScript intent definition without generating Swift. Runs the full Axint validation pipeline (134 diagnostic rules) and returns a JSON array of diagnostics: { severity: 'error'|'warning', code: 'AXnnn', line: number, column: number,... Use: use for TypeScript DSL diagnostics before Swift output; use swift.validate for existing Swift. Effects: read-only diagnostics; writes no files and uses no network.
| Name | Required | Description | Default |
|---|---|---|---|
| source | Yes | Full TypeScript source code containing a defineIntent() call. Must be a complete file starting with an axint... | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
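The description documents a concrete diagnostics format: a JSON array of { severity: 'error'|'warning', code: 'AXnnn', line, column, ... } objects (the trailing fields are truncated in the listing). A small sketch of how an agent might type and gate on that output — only the documented fields are modeled:

```typescript
// Diagnostic shape as documented by axint.validate. Fields beyond these
// are elided in the listing, so they are omitted here.
interface AxintDiagnostic {
  severity: "error" | "warning";
  code: string; // e.g. "AX042"
  line: number;
  column: number;
}

// An agent gating on validation would typically treat any error as a
// failure and surface warnings separately.
function hasErrors(diags: AxintDiagnostic[]): boolean {
  return diags.some((d) => d.severity === "error");
}
```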
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies that validation occurs 'without generating Swift output' and notes the remote endpoint limitation (using axint.schema.compile instead). Annotations already cover safety (readOnly, non-destructive, idempotent), so the description appropriately focuses on operational constraints rather than repeating those traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: the first sentence states the core purpose, and subsequent sentences add only essential qualifications and alternatives. Every sentence earns its place with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (validation without generation), rich annotations (covering safety and idempotence), and full schema coverage, the description is nearly complete. It lacks output details (no output schema), but for a validation tool, the return format might be implied as success/error. The endpoint-specific guidance compensates well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the 'source' parameter. The description adds minimal param-specific info, only noting it's 'Same format accepted by axint.compile.' This provides some cross-tool consistency but doesn't significantly enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Validate a TypeScript intent definition without generating Swift output.' It specifies the verb (validate), resource (TypeScript intent definition), and distinguishes it from sibling tools by noting it doesn't generate Swift output, unlike axint.compile which likely does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs alternatives: 'on this remote endpoint, use axint.schema.compile for validation' and 'Full TS validation available via CLI.' It clearly directs users to axint.schema.compile for this endpoint and mentions CLI as another option, helping avoid misuse.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.workflow.check (A) · Read-only · Idempotent
Read-only agent workflow gate. Requires the current Axint session token from axint.session.start unless requireSession=false is explicitly set. Use this at session start, after context compaction, before planning, writing, building, or... Use: use at stage gates to prove Axint workflow coverage; not a final build/test substitute. Effects: read-only gate but may update tiny workflow freshness stamps; no network.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Project directory containing .axint/session/current.json. Defaults to the MCP process cwd. | |
| agent | No | Agent host/tool lane for this gate. Codex/Claude/Cowork/Cursor use patch-first lanes; Xcode may use Xcode... | |
| notes | No | Optional human/agent context for why a step was skipped. | |
| stage | No | Workflow stage being checked. Defaults to pre-build. | |
| format | No | Output format. Defaults to markdown. | |
| surfaces | No | Apple surfaces touched by this task. If omitted, inferred from modifiedFiles. | |
| ranRepair | No | Whether axint.repair was used for an existing-code repair plan. This satisfies planning for patch-first... | |
| ranStatus | No | Whether axint.status was called to confirm the running MCP version. | |
| ranFeature | No | Whether axint.feature was used for a new surface scaffold. | |
| ranSuggest | No | Whether axint.suggest was used during planning. | |
| sessionToken | No | Token returned by axint.session.start. Required by default so compaction cannot erase the Axint workflow... | |
| modifiedFiles | No | Files changed in this agent pass, used to infer whether Swift validation is required. | |
| ranCloudCheck | No | Whether axint.cloud.check was run with source/evidence. | |
| availableTools | No | Optional list of Axint MCP tools visible in this host session. When supplied, workflow.check will not... | |
| requireSession | No | Set false only for legacy/manual checks. Defaults to true. | |
| sessionStarted | No | Whether axint.session.start was called in this chat/recovery pass. | |
| readDocsContext | No | Whether .axint/AXINT_DOCS_CONTEXT.md was read or axint.context.docs was called after a new chat or context... | |
| ranSwiftValidate | No | Whether axint.swift.validate was run on modified Swift. | |
| xcodeBuildPassed | No | Whether Xcode build evidence passed. | |
| xcodeTestsPassed | No | Whether focused unit/UI tests passed. | |
| featureBypassReason | No | Concrete reason axint.feature was intentionally bypassed. Use for existing-code edits, patch-first repairs,... | |
| readAgentInstructions | No | Whether AGENTS.md, CLAUDE.md, or .axint/project.json was read after a new chat or context compaction. | |
| readRehydrationContext | No | Whether .axint/AXINT_REHYDRATE.md was read after a new chat, context compaction, MCP restart, or drift. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
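The many ran* booleans above feed a stage gate. A hypothetical reduction of that logic — an illustration of the parameter semantics, not the tool's actual rules — might look like this:

```typescript
// Hypothetical pre-build gate over a subset of the evidence flags
// axint.workflow.check accepts. The real tool checks far more.
interface GateEvidence {
  sessionStarted: boolean;
  ranStatus: boolean;
  ranSwiftValidate: boolean;
  modifiedSwift: boolean; // the real tool infers this from modifiedFiles
}

function preBuildGate(e: GateEvidence): string[] {
  const missing: string[] = [];
  if (!e.sessionStarted) missing.push("axint.session.start");
  if (!e.ranStatus) missing.push("axint.status");
  // Swift validation is only required when Swift files were touched,
  // mirroring the modifiedFiles inference described in the schema.
  if (e.modifiedSwift && !e.ranSwiftValidate) {
    missing.push("axint.swift.validate");
  }
  return missing; // empty array means the gate passes
}
```

An empty result would correspond to the gate passing; any listed tool names are the steps the agent still owes evidence for.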
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds significant behavioral context beyond annotations: it requires a session token unless requireSession=false, explains the gate logic, and details what checks are performed (e.g., validation of tools used). No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that is front-loaded with the purpose. It is dense but not overly long, and every sentence contributes meaning. It could be slightly more structured (e.g., bullet points) for easier scanning, but it remains effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (18 parameters, no output schema), the description provides a comprehensive overview of the tool's role in the workflow. It explains prerequisites (session token), usage context, and the types of checks performed. It doesn't detail return values, but as a gate tool the outputs are likely standard. Annotations compensate for safety. Adequately complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented. The description adds value by explaining the workflow stage parameter and the meaning of ran* booleans in the context of the gate. It provides semantics beyond the schema's individual descriptions, justifying a score above baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Read-only agent workflow gate' which is a specific verb+resource. It distinguishes itself from sibling tools by naming them explicitly (suggest, feature, swift.validate, cloud.check) and their roles. The purpose is unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says when to use the tool: 'at session start, after context compaction, before planning, writing, building, or committing.' It also provides alternatives by listing other tools for specific actions, guiding the agent on when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.xcode.guard (C)
Guard an Xcode agent session against context compaction and Axint drift. Checks project memory files, active Axint session, latest Axint Run or guard proof, and long-task freshness. Writes... Use: call around long Xcode tasks, context recovery, broad Swift edits, or before claiming runtime proof. Effects: writes .axint/guard proof and may start a session; does not edit app source or use network.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Project directory to guard. Defaults to the MCP process cwd. | |
| notes | No | Agent/user notes to scan for compaction, drift, forgotten Axint usage, or long-task risk. | |
| stage | No | Current Xcode workflow stage. Defaults to context-recovery. | |
| format | No | Output format. Defaults to markdown. | |
| platform | No | Target Apple platform, such as macOS, iOS, visionOS, or all. | |
| projectName | No | Project name for the guard report. | |
| writeReport | No | Whether to write .axint/guard/latest.json and latest.md. Defaults to true. | |
| sessionToken | No | Current axint.session.start token, if already known. | |
| lastAxintTool | No | Last Axint tool the agent used, e.g. axint.suggest or axint.feature. | |
| modifiedFiles | No | Files in scope for this task. | |
| expectedVersion | No | Expected Axint version for the active project. | |
| lastAxintResult | No | Short result from the last Axint tool call. | |
| autoStartSession | No | Whether to start axint.session.start automatically if no active session exists. Defaults to true. | |
| maxMinutesSinceAxint | No | Maximum allowed minutes since latest Axint evidence. Defaults to 10. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
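The `maxMinutesSinceAxint` parameter implies a freshness check. A sketch of that check, using only what the schema states (a 10-minute default window; the function name is illustrative):

```typescript
// Sketch of the staleness test implied by maxMinutesSinceAxint: the
// guard would flag the session when the latest Axint evidence is older
// than the allowed window (default 10 minutes per the schema).
function isStale(
  lastEvidence: Date,
  now: Date,
  maxMinutesSinceAxint = 10
): boolean {
  const ageMinutes = (now.getTime() - lastEvidence.getTime()) / 60_000;
  return ageMinutes > maxMinutesSinceAxint;
}
```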
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description says 'checks', implying read-only behavior, but the annotations set readOnlyHint=false, indicating mutation. This contradiction undermines transparency. No other behavioral traits are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is truncated mid-sentence. It is under-specified rather than concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 14 parameters, no output schema, and a truncated description, the tool definition is severely incomplete for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description provides almost no parameter explanation (only vague references such as 'project memory files'). The 14 parameters remain undocumented in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it 'guards an Xcode agent session against context compaction and Axint drift' and checks specific resources, making the purpose clear despite being truncated. It distinguishes from siblings by being the only 'guard' tool among many Axint tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives, or when not to use it, and lacks context for appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
axint.xcode.write (C)
Write a file inside the Xcode project through the Axint guard path. For Swift files, runs axint.swift.validate and axint.cloud.check immediately, then records .axint/guard/latest.* proof. Use... Use: use only for guarded Xcode-project file writes; outside Xcode, patch normally and validate after. Effects: writes the requested file inside cwd, may create dirs, validates Swift, and may write guard/check artifacts.
| Name | Required | Description | Default |
|---|---|---|---|
| cwd | No | Project root. Defaults to the MCP process cwd. | |
| path | Yes | File path to write. Relative paths are resolved inside cwd; absolute paths must still be inside cwd. | |
| notes | No | Agent notes or user feedback to scan for drift while writing. | |
| format | No | Output format. Defaults to markdown. | |
| content | Yes | Full file contents to write. | |
| platform | No | Target Apple platform for Cloud Check. | |
| cloudCheck | No | Whether to run Cloud Check for .swift files. Defaults to true. | |
| createDirs | No | Whether to create parent directories before writing. Defaults to true. | |
| projectName | No | Project name for guard/session reports. | |
| sessionToken | No | Current axint.session.start token, if already known. | |
| validateSwift | No | Whether to run Swift validation for .swift files. Defaults to true. | |
| expectedVersion | No | Expected Axint version for this project. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| text | Yes | Primary Axint tool response text, matching the first text content block. |
| isError | No | Whether Axint marked the tool response as an error. |
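The `path` schema states that relative paths resolve inside cwd and absolute paths must still be inside cwd. A containment check along those lines — illustrative, not the tool's actual implementation — can be built from Node's path module:

```typescript
import * as path from "node:path";

// Returns true when the requested write path stays inside the project
// root, whether it was given as a relative or an absolute path.
function isInsideCwd(cwd: string, requested: string): boolean {
  const root = path.resolve(cwd);
  const target = path.resolve(root, requested);
  // The path is contained when the relative hop has no ".." escape
  // and did not land on a different root.
  const rel = path.relative(root, target);
  return rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel));
}
```

A guard like this is what turns "absolute paths must still be inside cwd" into an enforceable invariant rather than a documentation promise.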
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations set destructiveHint=false, yet writing files is inherently destructive. The description adds that for Swift files it runs validation and cloud checks, which is useful context beyond the annotations. However, it does not disclose overwrite behavior, failure handling, or permission requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, which is concise but omits crucial information. It is not as rich as needed for a complex tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 12 parameters, no output schema, and no description of return values or side effects, the description is incomplete for effective tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain any of the 12 parameters (path, content, etc.), leaving the agent with no additional meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Write') and resource ('file inside the Xcode project'), with additional context for Swift files. However, 'through the Axint guard path' is vague and does not differentiate it from siblings such as axint.repair or axint.compile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no explicit guidance on when to use this tool versus alternatives, and no mention of when not to use it or of prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.