coderegistry
Server Details
Enterprise code intelligence for M&A, security audits, and tech debt. Hosted server with a 200k free tier.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: The-Code-Registry/mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 16 of 16 tools scored.
Most tools have distinct purposes targeting specific resources (accounts, projects, vaults) with clear CRUD operations. However, some potential confusion exists between get_vault, get-code-vault-summary, get-code-vault-results, and get-code-vault-reports, as they all retrieve vault information but at different detail levels. The descriptions help differentiate them, but an agent might need to carefully parse which tool provides the exact data needed.
Naming conventions are highly inconsistent. There's a mix of snake_case (create_account, list_projects) and kebab-case (create-code-vault, get-code-vault-reports), with no discernible pattern. Some tools use verbs like 'create' and 'list', while others use 'rotate' and 'reanalyze', leading to a chaotic overall naming scheme that lacks predictability.
With 16 tools, the count is slightly high but reasonable for a code registry domain covering accounts, projects, and vaults with analysis capabilities. It includes core CRUD operations and specialized analysis tools, suggesting a well-scoped yet comprehensive surface. The set feels neither bloated nor thin for the apparent scope.
The tool set provides complete coverage for the code registry domain. It includes full CRUD for accounts, projects, and vaults, along with specialized operations like analysis, reanalysis, reporting, and API key management. There are no obvious gaps; agents can manage the entire lifecycle from account creation to code analysis and deletion.
Available Tools
16 tools
create_account
Creates a new Code Registry account and returns API credentials. Cold starts can cause the first request to time out; retry with backoff. No API key required.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Account owner's name. | |
| email | Yes | Account owner's email. | |
| team_name | Yes | Team or company name. | |
| integrator_id | No | Optional integrator identifier (used for API key scoping). | |
| integrator_name | No | Optional integrator display name. |
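The cold-start caveat above suggests wrapping the first call in a retry loop. Here is a minimal Python sketch, assuming a generic `call_tool(name, arguments)` helper that stands in for whatever MCP client is in use; the helper name and the argument values are illustrative, not part of the server's API.

```python
import time

def call_with_backoff(fn, attempts=4, base_delay=2.0):
    """Retry a tool call that may time out while the server cold-starts.

    `fn` is any zero-argument callable that performs the actual MCP tool
    call; the exception type your client raises on timeout may differ.
    """
    delay = base_delay
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
            delay *= 2  # exponential backoff between retries

# Illustrative usage (argument values are hypothetical):
# account = call_with_backoff(
#     lambda: call_tool("create_account", {
#         "name": "Ada Lovelace",
#         "email": "ada@example.com",
#         "team_name": "Analytical Engines",
#     })
# )
```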
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a write operation ('Creates'), includes authentication details ('No API key required'), and addresses performance considerations ('Cold starts... time out; retry with backoff'). This covers mutation behavior, prerequisites, and error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by two concise operational notes. Each sentence adds value: the first defines the tool, the second addresses performance, and the third clarifies authentication. There is no wasted text or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (account creation with credentials) and lack of annotations or output schema, the description is largely complete. It covers purpose, behavioral traits, and usage context. However, it does not detail the format of returned credentials (e.g., key structure) or error scenarios beyond timeouts, leaving minor gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters documented in the schema (e.g., 'Account owner's name' for 'name'). The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining interactions between fields. The baseline score of 3 reflects adequate but not enhanced parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Creates a new Code Registry account') and the outcome ('returns API credentials'). It distinguishes this tool from siblings like 'get_account' (read) and 'delete_account' (delete) by focusing on account creation with credential generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage by mentioning 'No API key required' (prerequisite) and 'Cold starts can cause the first request to time out; retry with backoff' (operational guidance). However, it does not explicitly state when to use this tool versus alternatives like 'rotate_api_key' for existing accounts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create-code-vault
Creates a code vault and queues the initial analysis. IMPORTANT: Always use LOCAL_AGENT (or AGENT) unless the user explicitly requests remote analysis or Docker is unavailable. LOCAL_AGENT runs locally via Docker; source code stays on the machine and only anonymous analysis data is uploaded. A Git repo is only required if you want Git-based insights (git-history, code-contributors). If Git insights are needed, the mounted path must be a standard Git repo containing a .git directory (not a .git file pointing to a worktree). The response includes next_steps.commands that must be executed to start the local agent. Use GIT/FILE_ARCHIVE only when the user provides a reachable URL plus read credentials (username/password or token). Cold starts can cause the first request to time out; retry with backoff. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | Vault name. | |
| branch | No | Optional source branch name (GIT only). | |
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| user_id | No | User id (from create_account or get_account). | |
| password | No | Source password or personal access token (required for private GIT repos; must have read access). | |
| username | No | Source username (required for private GIT repos; must have read access). | |
| project_id | No | Project id (from create_project). | |
| source_url | No | Repository or archive URL (required for GIT/FILE_ARCHIVE; ignored for LOCAL_AGENT/AGENT). Must be reachable by the platform. FILE_ARCHIVE accepts .zip/.tar URLs and supported Google Drive/Dropbox links. | |
| description | No | Optional vault description. | |
| source_type | No | Code source type. ALWAYS use LOCAL_AGENT (or AGENT) unless the user explicitly requests remote analysis or Docker is unavailable. LOCAL_AGENT runs locally via Docker; source code stays on the machine and only anonymous analysis data is uploaded. A Git repo is only required for Git-based insights (git-history, code-contributors). If you need those, ensure the mounted path is a standard Git repo containing a .git directory (not a .git file pointing to a worktree). Use GIT/FILE_ARCHIVE only when the user provides a reachable URL plus read credentials (username/password or token). Do NOT attempt GIT without credentials. | LOCAL_AGENT |
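Because the response's next_steps.commands must be executed to start the local agent, a client needs a small follow-up step after this tool call. A hedged sketch, assuming `next_steps.commands` is a list of shell command strings; the response shape is described above but not covered by any output schema.

```python
import subprocess

def start_local_agent(create_response: dict) -> None:
    """Execute the next_steps.commands returned by create-code-vault.

    Assumes each entry is a complete shell command that starts or
    configures the local Docker agent; adjust if your response differs.
    """
    commands = create_response.get("next_steps", {}).get("commands", [])
    for cmd in commands:
        subprocess.run(cmd, shell=True, check=True)
```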
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and delivers comprehensive behavioral details: explains that LOCAL_AGENT runs locally via Docker with source code staying on machine, describes authentication requirements (X-API-Key or api_key parameter), mentions cold start timeouts with retry advice, specifies that response includes next_steps.commands to execute, and details Git-specific requirements for insights.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense but could be better structured. It front-loads the core purpose but then mixes usage guidelines, behavioral details, and parameter semantics in a somewhat dense paragraph. While every sentence adds value, the organization could be improved for readability, and some sentences are quite long with multiple clauses.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 10 parameters, no annotations, and no output schema, the description does an excellent job covering behavioral aspects, usage guidelines, and parameter semantics. The main gap is the lack of information about return values or what 'queues the initial analysis' means in practice, which would be helpful given no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds significant value by explaining the semantic meaning of source_type choices: when to use LOCAL_AGENT vs GIT/FILE_ARCHIVE, the implications of each choice (local vs remote analysis, Git insights requirements), and credential requirements. However, it doesn't provide additional context for other parameters beyond what the schema already documents well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Creates a code vault and queues the initial analysis' - a specific verb ('creates') with resource ('code vault') and action ('queues initial analysis'). It distinguishes from siblings like 'get-code-vault-summary' or 'reanalyze-code-vault' by focusing on creation rather than retrieval or reanalysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Always use LOCAL_AGENT (or AGENT) unless the user explicitly requests remote analysis or Docker is unavailable' and 'Use GIT/FILE_ARCHIVE only when the user provides a reachable URL plus read credentials.' It also mentions prerequisites like API key requirements and Git repo requirements for specific insights.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_project
Creates a project for a team user. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Project name. | |
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| user_id | Yes | User id (from create_account or get_account). | |
| description | No | Optional project description. |
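Nearly every tool here accepts the API key either as an X-API-Key header or as an `api_key` argument. A small helper can normalize that choice; this is a sketch, and `headers_supported` depends on your MCP client, not on the server.

```python
def with_auth(arguments: dict, api_key: str, headers_supported: bool):
    """Return (arguments, headers) with the API key attached one way.

    Prefer the X-API-Key header; fall back to the documented api_key
    argument for clients that cannot set custom headers.
    """
    if headers_supported:
        return arguments, {"X-API-Key": api_key}
    return {**arguments, "api_key": api_key}, {}
```

The same pattern applies to every tool below that lists an optional `api_key` parameter.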
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively explains the authentication mechanism (X-API-Key or api_key parameter), mentions where users can obtain API keys, and addresses client compatibility issues. However, it doesn't cover potential side effects, error conditions, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief (two sentences) and front-loaded with the core purpose. The second sentence efficiently addresses authentication implementation details without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no annotations and no output schema, the description provides adequate but incomplete context. It covers authentication well but lacks information about what happens after creation, potential constraints, or error scenarios that would be important for an agent to use this tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all 4 parameters thoroughly. The description adds marginal value by explaining the api_key parameter's purpose as an alternative to headers, but doesn't provide additional semantic context beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Creates a project') and the target ('for a team user'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate this tool from its sibling 'create_account' or other creation tools, which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context by mentioning authentication requirements and header alternatives, but it doesn't explicitly state when to use this tool versus alternatives like 'create_account' or 'get_project'. The guidance is helpful but incomplete for sibling tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_account
Deletes the team account plus all projects and vaults. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| confirm | Yes | Set true to confirm deletion. |
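Because this deletion cascades to every project and vault, the `confirm` flag is worth gating behind an explicit human check rather than setting it programmatically. A sketch; the prompt wording and team-name check are illustrative assumptions.

```python
def confirmed_delete_args(team_name: str) -> dict:
    """Build delete_account arguments only after an explicit prompt.

    delete_account removes the team account plus all projects and
    vaults, so never set confirm=True without a human in the loop.
    """
    answer = input(f"Type the team name ({team_name!r}) to confirm deletion: ")
    if answer != team_name:
        raise SystemExit("Deletion aborted.")
    return {"confirm": True}
```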
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the destructive nature ('Deletes the team account plus all projects and vaults'), authentication requirements (API key), and implementation details for clients without header support. However, it lacks information on rate limits, error handling, or confirmation of deletion success.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action and scope, followed by essential authentication and implementation details in two concise sentences. Every sentence adds value without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high-stakes nature of this destructive tool with no annotations or output schema, the description is reasonably complete. It covers the action, scope, authentication, and a workaround for header limitations. However, it could improve by mentioning irreversible consequences or expected response formats to fully guide the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters. The description adds context by explaining the purpose of the api_key parameter for clients that cannot set headers, but it does not provide additional meaning beyond what the schema specifies for the confirm parameter. Baseline 3 is appropriate as the schema handles most documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Deletes') and the target resource ('the team account plus all projects and vaults'), making the purpose explicit. It distinguishes this tool from siblings like delete_project or delete-code-vault by specifying the broader scope of deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool by mentioning the requirement for an API key and how to provide it, but it does not explicitly state when to use alternatives like delete_project or delete-code-vault for more targeted deletions. It offers practical guidance but lacks explicit 'when-not' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete-code-vault
Deletes a code vault and related analysis data. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| vault_id | Yes | Vault id (from create-code-vault or list_vaults). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that this is a destructive operation (deletes vault and analysis data) and specifies authentication requirements (API key). However, it doesn't mention whether deletion is permanent/irreversible, rate limits, error conditions, or what happens to associated resources. It adds some behavioral context but leaves gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three concise sentences that each serve a purpose: stating the action, specifying authentication, and providing alternative parameter usage. It's front-loaded with the core purpose. Minor improvement possible by combining sentences about authentication.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description provides basic purpose and authentication context but lacks important details about the deletion's permanence, confirmation requirements, error responses, or what 'related analysis data' specifically includes. Given the tool's destructive nature, more behavioral transparency would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds marginal value by explaining the api_key parameter's purpose ('for clients that cannot set X-API-Key headers'), but doesn't provide additional semantic context beyond what's in the schema descriptions. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Deletes') and target resource ('a code vault and related analysis data'), distinguishing it from sibling tools like delete_account or delete_project. It precisely identifies what gets removed beyond just the vault itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (to delete a code vault) and mentions authentication requirements, but doesn't explicitly differentiate when to use it versus alternatives like delete_account or delete_project. It lacks explicit 'when-not' guidance or comparison to similar deletion tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_project
Deletes a project and its vaults. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| project_id | Yes | Project id. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses authentication needs and a workaround for header limitations, which is valuable. However, it doesn't mention that deletion is destructive/permanent, rate limits, or what happens to dependent resources beyond vaults, leaving behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action, followed by authentication details, in two efficient sentences. It could be slightly more concise by integrating the header workaround more smoothly, but overall it's well-structured with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description covers authentication well but lacks critical details like confirmation of deletion, error handling, or return values. Given the complexity, it's adequate but has clear gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters well. The description adds context for 'api_key' as an alternative to headers, but doesn't provide additional meaning beyond what the schema specifies for 'project_id'. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Deletes') and resources ('a project and its vaults'), making the purpose unambiguous. It distinguishes from siblings like 'delete_account' and 'delete-code-vault' by specifying the scope of deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about authentication requirements (X-API-Key or api_key parameter), which helps determine when to use this tool. However, it doesn't explicitly state when to choose this over alternatives like 'delete_account' or warn about irreversible consequences.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account
Returns the team owner account information. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond basic functionality by specifying authentication needs (X-API-Key or api_key) and handling for clients without header support. However, it doesn't cover other behavioral traits like rate limits, error handling, or response format, which could be useful.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences that are front-loaded: the first states the purpose, and the second adds necessary authentication context. There's minimal waste, though it could be slightly more structured by separating usage instructions more clearly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (authentication-focused read operation), no annotations, and no output schema, the description is somewhat complete but has gaps. It covers authentication and basic purpose but lacks details on return values, error cases, or prerequisites beyond auth, making it adequate but not fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents the single optional parameter (api_key) with examples. The description adds marginal value by explaining the parameter's purpose (for clients that cannot set headers), but doesn't provide additional syntax or format details beyond what the schema offers, aligning with the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Returns') and resource ('team owner account information'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_project' or 'get_vault', which likely return different resource types, so it doesn't fully achieve sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage by mentioning authentication requirements (X-API-Key or api_key argument), but it doesn't explicitly state when to use this tool versus alternatives like 'list_projects' or other get_* tools. No clear exclusions or direct alternatives are named, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-code-vault-reports
Returns report URLs (snapshot/comparison) for a vault. Completion rules: if version is 1.0.0, snapshot indicates completion and comparison is null; for versions above 1.0.0, comparison indicates completion. If not ready, retry with exponential backoff (5s, 10s, 20s, 40s, max 60s). This endpoint always returns the latest version only; once reanalysis starts, prior versions are no longer accessible here. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| vault_id | Yes | Vault id (from create-code-vault or list_vaults). |
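The version-dependent completion rules are easy to get wrong, so they are worth encoding directly. A sketch; the field names (`version`, `snapshot`, `comparison`) are assumptions inferred from the description, since no output schema is published.

```python
def reports_ready(reports: dict) -> bool:
    """Apply the completion rules described above.

    Field names are assumed from the tool description; verify against
    an actual response before relying on this.
    """
    if reports.get("version") == "1.0.0":
        # First analysis: snapshot signals completion; comparison is null.
        return reports.get("snapshot") is not None
    # Versions above 1.0.0: the comparison report signals completion.
    return reports.get("comparison") is not None
```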
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and does so comprehensively. It explains completion rules based on version numbers, retry behavior with specific backoff timing, version accessibility limitations, authentication requirements, and header/parameter alternatives. This provides rich behavioral context beyond what a basic schema would indicate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core functionality. Each sentence adds important information about behavior, authentication, or implementation details. While somewhat dense, there's minimal wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides substantial behavioral context about completion rules, retry logic, version handling, and authentication. The main gap is the lack of information about what the returned report URLs actually contain or their format, which would be helpful given the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds some context about the api_key parameter (explaining it's for clients that can't set headers), but doesn't provide additional meaning beyond what the schema already specifies for the vault_id parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns report URLs (snapshot/comparison) for a vault, which is a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'get-code-vault-results' or 'get-code-vault-summary', which likely provide different types of vault information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (to get report URLs for a vault) and includes important behavioral guidance about retry logic and version handling. However, it doesn't explicitly state when NOT to use this tool or name specific alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-code-vault-results
Returns full facet results and AI insights for a vault. Analysis is async; if status is 'processing', poll with exponential backoff (5s, 10s, 20s, 40s, max 60s). Analysis can be as quick as 20-30 minutes for under 500,000 lines of code. Larger codebases can take much longer, especially with the security scan. Facet meanings are documented in resources://docs/facets; AI Quotient is a code-quality metric (not AI-generated code). AI insights can take a few minutes after analysis completes; if ai_insights is empty, poll again and check ai_insights_status per facet (ready/processing/not_available). This endpoint always returns the latest version only; once reanalysis starts, prior versions are no longer accessible here. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| vault_id | Yes | Vault id (from create-code-vault or list_vaults). |
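The documented polling schedule (5s, 10s, 20s, 40s, capped at 60s) translates directly into a loop. A sketch, assuming a zero-argument `fetch` callable that performs the tool call and a `status` field as described; no output schema is published, so verify the field names against a real response.

```python
import time

def poll_results(fetch, max_wait=7200):
    """Poll get-code-vault-results until analysis leaves 'processing'.

    Uses the backoff sequence from the description: 5s, 10s, 20s, 40s,
    then capped at 60s. Analysis can take 20-30 minutes or much longer,
    so max_wait should be generous.
    """
    delay, waited = 5, 0
    while waited < max_wait:
        results = fetch()
        if results.get("status") != "processing":
            # Note: ai_insights may still be empty at this point; check
            # the per-facet ai_insights_status field and poll again.
            return results
        time.sleep(delay)
        waited += delay
        delay = min(delay * 2, 60)
    raise TimeoutError("analysis did not complete within max_wait")
```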
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and excels at disclosing behavioral traits. It details async processing with polling strategies (exponential backoff), timing estimates (20-30 minutes for under 500k lines), versioning constraints (latest only, prior versions inaccessible), and authentication requirements (X-API-Key or api_key parameter). It also explains AI insights availability and facet documentation sources.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose. Most sentences earn their place by adding critical behavioral details (e.g., polling, timing, versioning). However, it could be better structured: several long sentences mix authentication and processing details, which reduces readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (async processing, multiple behavioral aspects) and lack of annotations/output schema, the description is highly complete. It covers purpose, usage, timing, polling, versioning, authentication, and output nuances (AI insights, facet meanings). No significant gaps remain for an agent to understand and invoke the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (vault_id and api_key) adequately. The description adds minimal value beyond the schema: it clarifies api_key usage 'if headers aren't supported,' but does not provide additional meaning for vault_id or explain parameter interactions. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'returns full facet results and AI insights for a vault,' specifying the verb ('returns'), resource ('vault'), and output type ('facet results and AI insights'). It distinguishes from siblings like 'get-code-vault-summary' (summary vs. full results) and 'get-code-vault-reports' (reports vs. insights), though not explicitly named. However, it could be more precise about what 'full facet results' entail.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: for retrieving analysis results after async processing, with guidance on polling if status is 'processing.' It implies usage after vault creation or reanalysis, but does not explicitly state when to use alternatives like 'get-code-vault-summary' for summaries or 'get-code-vault-reports' for reports, nor does it list exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-code-vault-summary
Returns the latest version/status info for a vault. Analysis is async; if status is 'processing', poll with exponential backoff (5s, 10s, 20s, 40s, max 60s). Analysis can be as quick as 20-30 minutes for under 500,000 lines of code. Larger codebases can take much longer, especially with the security scan. This endpoint always returns the latest version only; once reanalysis starts, prior versions are no longer accessible here. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| vault_id | Yes | Vault id (from create-code-vault or list_vaults). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and excels at disclosing behavioral traits: async processing nature, polling strategy with specific backoff intervals, typical processing times (20-30 minutes for under 500k lines), authentication requirements (X-API-Key or api_key parameter), and the limitation that only the latest version is accessible. This provides comprehensive operational context beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with core functionality. Each sentence adds important operational information (polling strategy, timing expectations, version limitations, authentication). Minor redundancy exists in explaining api_key usage in both authentication and parameter contexts, but overall structure is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides excellent context about behavior, timing, authentication, and limitations. It compensates well for missing structured fields. The only gap is lack of information about return format/structure, but given the tool's purpose (status/summary) and comprehensive behavioral disclosure, this is a minor omission.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing solid baseline documentation for both parameters. The description adds value by explaining the api_key parameter's purpose ('for clients that cannot set X-API-Key headers') and mentioning authentication requirements, but doesn't provide additional semantic context for vault_id beyond what the schema already states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Returns the latest version/status info for a vault' with a specific verb and resource. It distinguishes from siblings like 'get-code-vault-reports' and 'get-code-vault-results' by focusing on summary/status rather than detailed outputs. However, it doesn't explicitly contrast with 'get_vault' which might have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: for checking latest version/status, with specific instructions for async polling when status is 'processing'. It also implicitly suggests alternatives by mentioning this endpoint 'always returns the latest version only' and that 'prior versions are no longer accessible here', guiding users to other endpoints for historical data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_project
Returns a specific project by id. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| project_id | Yes | Project id. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the authentication requirement (API key via header or argument), which is crucial for a read operation. However, it doesn't mention error conditions, rate limits, or response format, leaving some behavioral aspects unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences that each serve a clear purpose: stating the tool's function and explaining authentication options. It's front-loaded with the core purpose. The only minor inefficiency is the parenthetical API key generation note, which could be slightly trimmed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read operation with 2 parameters and no output schema, the description covers authentication well but lacks information about return values, error handling, or what constitutes a valid project_id. Given the complexity is moderate and there are no annotations, the description should ideally mention at least the expected response structure or common failure cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds marginal value by explaining the alternative authentication method (api_key parameter when headers aren't supported), but doesn't provide additional semantic context beyond what's in the schema. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Returns') and resource ('a specific project by id'), making the purpose unambiguous. It distinguishes from siblings like 'list_projects' by specifying retrieval of a single project rather than listing multiple. However, it doesn't explicitly contrast with other get_* tools beyond the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning the required 'project_id' parameter and authentication context, but doesn't explicitly state when to use this tool versus alternatives like 'list_projects' for browsing or other get_* tools for different resources. It provides some context about authentication methods but lacks clear when/when-not guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vault
Returns a specific vault by id. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| vault_id | Yes | Vault id. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing authentication requirements (X-API-Key or api_key parameter) and fallback behavior for clients without header support. It doesn't mention rate limits, error responses, or data format, but covers essential operational context for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently cover purpose and authentication details with zero waste. The first sentence states the core function, and the second handles authentication nuances. It's appropriately front-loaded with the main purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read operation with no annotations and no output schema, the description adequately covers authentication and basic purpose but lacks information about return format, error conditions, or relationship to sibling tools. Given the 2-parameter complexity and missing output schema, it should ideally describe what a 'vault' contains or the response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters. The description adds marginal value by explaining the api_key parameter's purpose as a fallback for header limitations, but doesn't provide additional semantic context beyond what the schema already specifies about vault_id or api_key usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Returns') and resource ('a specific vault by id'), making the purpose immediately understandable. It distinguishes from sibling 'list_vaults' by specifying retrieval of a single vault rather than listing multiple. However, it doesn't explicitly contrast with 'get-code-vault' tools, which might handle different vault types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need a specific vault identified by ID, but doesn't explicitly state when to use this versus alternatives like 'list_vaults' for browsing or 'get-code-vault-*' tools for code-specific vaults. It provides authentication guidance but lacks clear differentiation from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_projects
Lists all projects for the authenticated team. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing authentication requirements (X-API-Key or api_key parameter) and clarifying header vs. parameter usage for different client capabilities. It also implies read-only behavior through 'Lists' and mentions the team scope. However, it doesn't address potential rate limits, pagination, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences that each serve clear purposes: first stating the tool's function, second providing authentication guidance. It's front-loaded with the core purpose. Minor improvement could be made by combining authentication details more succinctly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a listing tool with no annotations and no output schema, the description covers authentication and basic functionality adequately. However, it lacks information about return format, pagination, sorting options, or error responses. Given the simplicity of the tool (1 optional parameter) and clear purpose, it's minimally complete but could be more comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the schema itself documenting the optional api_key parameter. The description adds value by explaining the semantic purpose of the api_key parameter ('for clients that cannot set X-API-Key headers') and providing authentication context. This goes beyond the schema's technical description, earning a baseline 3 with some added value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Lists' and resource 'all projects for the authenticated team', making the purpose specific. It distinguishes from siblings like 'get_project' (singular) and 'create_project' (write operation). However, it doesn't explicitly differentiate from 'list_vaults' which serves a similar listing function for a different resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context by mentioning authentication requirements (X-API-Key or api_key parameter) and noting it's for listing projects. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_project' (for single project) or 'list_vaults' (for different resource type), nor does it provide exclusion guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_vaults
Lists vaults within a project. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| project_id | Yes | Project id. |
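Together with list_projects, this tool supports a simple team-wide traversal. A sketch; `call_tool(name, arguments)` stands in for your MCP client, and the response field names (`projects`, `vaults`, `id`) are assumptions, since no output schemas are published.

```python
def all_vaults(call_tool) -> list:
    """Enumerate every vault across the team, project by project."""
    vaults = []
    for project in call_tool("list_projects", {}).get("projects", []):
        resp = call_tool("list_vaults", {"project_id": project["id"]})
        vaults.extend(resp.get("vaults", []))
    return vaults
```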
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes authentication requirements (API key via header or argument), which is crucial for a tool that likely requires authorization. It also hints at client compatibility issues ('if headers aren't supported'), adding practical context. However, it doesn't cover aspects like rate limits, pagination, or error behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by authentication details. Every sentence is necessary: the first defines the tool, and the rest provide critical implementation guidance. It avoids fluff, though the authentication explanation is slightly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does a decent job covering authentication and basic usage. However, for a list operation, it lacks details on return format (e.g., array of vault objects), pagination, or error handling. The context signals indicate moderate complexity (2 parameters, no nested objects), but the description leaves gaps in behavioral expectations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters (project_id and api_key). The description adds only marginal value: it restates the api_key's purpose for clients without header support but offers little beyond what the schema already documents. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Lists') and resource ('vaults within a project'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_vault' (which likely retrieves a single vault) or 'list_projects' (which lists projects instead of vaults), preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying it lists vaults 'within a project', suggesting it should be used when you have a project ID and want to see its vaults. However, it lacks explicit guidance on when to use this versus alternatives like 'get_vault' or 'list_projects', and doesn't mention prerequisites beyond authentication.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reanalyze-code-vault
Creates a new analysis version for an existing code vault using its existing source settings. For LOCAL_AGENT, the response includes next_steps.commands and the local agent must be run again. For GIT/FILE_ARCHIVE, the re-analysis of the original code source is queued automatically. Note: summary/results/report tools always return the latest version only, so reanalysis replaces access to prior version data. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| author | No | Optional author name override. | |
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| comment | No | Optional version comment. | |
| user_id | No | Optional user id for attribution (from create_account or get_account). | |
| vault_id | Yes | Vault id (from create-code-vault or list_vaults). |
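The LOCAL_AGENT branch described above implies the caller should inspect the response for next_steps.commands. Below is a sketch of that flow, assuming the server returns its payload as JSON text in the first content item; the listing does not document the exact response shape, and the vault id is a placeholder.

```typescript
// Kick off a re-analysis of an existing vault. The vault id is a
// placeholder; it would come from create-code-vault or list_vaults.
const res = await client.callTool({
  name: "reanalyze-code-vault",
  arguments: {
    vault_id: "vault_456",
    comment: "Re-run after dependency upgrade",
  },
});

// Assumption: the payload arrives as JSON text in the first content item.
const first = res.content?.[0];
if (first?.type === "text") {
  const body = JSON.parse(first.text);
  // Per the description, LOCAL_AGENT sources return next_steps.commands
  // and the local agent must be run again.
  if (body.next_steps?.commands) {
    console.log("Run locally:", body.next_steps.commands);
  }
}
```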
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes: 1) Different processing flows for LOCAL_AGENT vs GIT/FILE_ARCHIVE sources, 2) That reanalysis replaces access to prior version data, 3) Authentication requirements (X-API-Key or api_key parameter), and 4) Response structure for LOCAL_AGENT. It doesn't mention rate limits, error conditions, or time estimates for processing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose. Each sentence adds important information: processing differences, version replacement effect, and authentication requirements. While efficient, the authentication explanation could be slightly more concise, and the structure could better separate behavioral details from authentication instructions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description provides good behavioral context but has gaps. It explains authentication, processing differences, and version replacement, but doesn't describe the response format (beyond LOCAL_AGENT's next_steps.commands), error conditions, or success indicators. Given the complexity of reanalysis operations, more complete behavioral disclosure would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 5 parameters thoroughly. The description adds minimal parameter semantics beyond the schema - it mentions the api_key parameter as an alternative to headers, but doesn't provide additional context about vault_id selection or other parameters. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Creates a new analysis version') on a specific resource ('existing code vault') using specific settings ('existing source settings'). It distinguishes from sibling tools like 'create-code-vault' (which creates new vaults) and 'get-code-vault-summary' (which retrieves existing data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool: when re-analyzing an existing code vault rather than creating a new one. It mentions that summary/results/report tools only return the latest version, establishing the replacement effect. However, it doesn't explicitly state when NOT to use this tool or name specific alternatives for different scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rotate_api_key
Issues a fresh integrator API key. Requires X-API-Key (existing users can generate an API key in the web app). If headers aren't supported, pass api_key in arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Optional API key for clients that cannot set X-API-Key headers. | |
| integrator_id | No | Optional integrator identifier (defaults to existing integrator or 'default'). | |
| integrator_name | No | Optional integrator display name. |
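For completeness, a rotation call under the same assumptions. Since the listing does not state whether the previous key is revoked, it is safest to treat the old credential as invalid until confirmed.

```typescript
// Issue a fresh integrator API key. The integrator_id is optional and,
// per the schema, defaults to the existing integrator or 'default'.
const rotated = await client.callTool({
  name: "rotate_api_key",
  arguments: { integrator_id: "default" },
});
// The response shape is undocumented in this listing; log it and extract
// the new key manually until the format is confirmed.
console.log(rotated.content);
```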
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It usefully explains authentication requirements and header/parameter alternatives, which are important behavioral aspects. However, it doesn't mention whether this operation is idempotent, what happens to the old key, rate limits, or error conditions - leaving significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences that each serve distinct purposes: the first states the core function, the second provides implementation guidance. There's no wasted text, though it could be slightly more front-loaded by mentioning the optional parameters earlier.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description provides adequate but incomplete context. It covers authentication and input alternatives well, but lacks information about what the tool returns, whether the operation is reversible, permission requirements, or error handling - important gaps for an API key rotation operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all three parameters thoroughly. The description doesn't add any meaningful parameter semantics beyond what's in the schema - it only mentions the api_key parameter in the context of header alternatives. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Issues') and the resource ('a fresh integrator API key'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this from sibling tools like 'create_account' or other key-related operations that might exist in the broader system.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('Issues a fresh integrator API key') and includes practical guidance about authentication methods (X-API-Key header vs. api_key parameter). However, it doesn't specify when NOT to use it, name alternatives, or clarify whether existing keys remain valid after rotation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
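As a quick self-check before waiting on Glama's automatic verification, you can fetch the file yourself. This is a sketch under stated assumptions: the domain below is a placeholder, and it requires a runtime with a global fetch (Node 18+).

```typescript
// Placeholder domain -- use your server's actual domain.
const res = await fetch("https://your-server.example.com/.well-known/glama.json");
const manifest = await res.json();
console.assert(
  Array.isArray(manifest.maintainers) &&
    manifest.maintainers.some(
      (m: { email?: string }) => m.email === "your-email@example.com",
    ),
  "maintainer email must match your Glama account",
);
```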
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.