Glama

Server Details

Focused MCP server for OpenAI image/audio generation (v2.0.0). Wraps endpoints via HAPI CLI.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: la-rebelion/hapimcp
GitHub Stars: 7

Tool Descriptions: C

Average 2.7/5 across 9 of 9 tools scored.

Server Coherence: A
Disambiguation: 5/5

Every tool has a clearly distinct purpose with no ambiguity. The image tools (createImage, createImageEdit, createImageVariation) target different image operations, while the audio tools (createTranscription, createTranslation) handle distinct audio processing tasks. The moderation, model management, and listing tools each serve unique functions without overlap.

Naming Consistency: 5/5

All tools follow a consistent verb+noun pattern with 'create', 'delete', 'list', or 'retrieve' as the verb and a specific noun (e.g., Image, Transcription, Model). The naming is uniform throughout, using camelCase consistently without any deviations or mixed conventions.

Tool Count: 5/5

With 9 tools, the count is well-scoped for an OpenAI server covering image generation, audio processing, moderation, and model management. Each tool earns its place by addressing a specific, necessary operation in the domain, avoiding both bloat and thin coverage.

Completeness: 4/5

The tool set provides strong coverage for image, audio, moderation, and model operations, but has minor gaps. For example, there is no tool for updating models or handling other OpenAI services like chat completions, which agents might need to work around. However, core workflows are well-supported with no dead ends.

Available Tools

9 tools
createImage: C

Creates an image given a prompt.

Parameters (JSON Schema)
- createImageBody (required)
- x-hapi-auth-state (optional)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states 'Creates an image' which implies a write/mutation operation, but doesn't disclose any behavioral traits such as rate limits, authentication needs (implied by x-hapi-auth-state in schema but not described), cost implications, or what happens on failure. This is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
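For reference, the missing annotation coverage could be supplied with MCP tool annotations like the sketch below. The hint field names (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) come from the MCP specification's ToolAnnotations; the specific values are assumptions about how this tool behaves.

```python
# Sketch of MCP ToolAnnotations for createImage; the hint fields follow
# the MCP spec, while the values are assumptions about this tool.
create_image_annotations = {
    "title": "Create Image",
    "readOnlyHint": False,     # generates new content, not a pure read
    "destructiveHint": False,  # does not modify or delete existing data
    "idempotentHint": False,   # repeated calls yield different images
    "openWorldHint": True,     # calls out to an external API
}
```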

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with just one sentence, 'Creates an image given a prompt.' It's front-loaded and wastes no words, making it easy to parse quickly. However, this conciseness comes at the cost of completeness, as noted in other dimensions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (2 parameters with nested objects, no output schema, and no annotations), the description is incomplete. It doesn't explain what the tool returns (e.g., image URLs or base64 data), how to handle the nested parameters, or any error conditions. For a tool that likely involves AI image generation with multiple options, this leaves too much unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning none of the parameters (createImageBody and x-hapi-auth-state) have descriptions in the schema. The description 'Creates an image given a prompt' only hints at the 'prompt' parameter but doesn't explain the semantics of any parameters, including nested ones like n, size, user, or response_format. It fails to compensate for the lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
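To illustrate what the undocumented nested body might contain, here is a hypothetical argument sketch. Only "prompt" is hinted at by the description; the other fields (n, size, response_format) are assumptions based on the shape of the OpenAI Images API, not anything this tool's schema states.

```python
# Hypothetical arguments for createImage; nested field names are
# assumptions modeled on the OpenAI Images API, not the tool schema.
create_image_args = {
    "createImageBody": {
        "prompt": "A watercolor fox in a snowy forest",
        "n": 1,                    # number of images to generate (assumed)
        "size": "1024x1024",       # output dimensions (assumed)
        "response_format": "url",  # "url" or "b64_json" (assumed)
    },
    "x-hapi-auth-state": None,     # optional; purpose undocumented
}
```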

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Creates an image given a prompt' clearly states the verb ('creates') and resource ('image'), but it's vague about what kind of image creation this is (e.g., AI-generated vs. other types). It doesn't distinguish from siblings like createImageEdit or createImageVariation, which also create images but with different inputs or methods.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For example, it doesn't explain that this is for generating new images from text prompts, as opposed to createImageEdit (editing existing images) or createImageVariation (creating variations of existing images). The description lacks any context about prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

createImageEdit: C

Creates an edited or extended image given an original image and a prompt.

Parameters (JSON Schema)
- x-hapi-auth-state (optional)
- createImageEditBody (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool creates edited images but lacks details on permissions, rate limits, costs, or response format. For a mutation tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that states the core purpose without waste. It's appropriately sized and front-loaded, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (image editing with multiple parameters), lack of annotations, no output schema, and low schema description coverage, the description is insufficient. It doesn't cover behavioral aspects, parameter details, or output expectations, leaving critical gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description mentions 'original image and a prompt' but doesn't explain parameters beyond that. With 0% schema description coverage and 2 parameters (one required), the description fails to add meaningful context about inputs like image format, mask usage, or optional parameters. It doesn't compensate for the schema's lack of descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
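As a sketch of the kind of detail the description omits, a hypothetical createImageEdit call might look like this. The mask field is an assumption based on how the OpenAI image-edit endpoint typically works; the tool's own schema documents none of these fields.

```python
# Hypothetical arguments for createImageEdit; field names are
# assumptions, since the tool schema provides no descriptions.
create_image_edit_args = {
    "createImageEditBody": {
        "image": "original.png",  # source image (format requirements undocumented)
        "mask": "mask.png",       # transparent areas mark regions to edit (assumed)
        "prompt": "Replace the sky with a sunset",
    },
    "x-hapi-auth-state": None,    # optional; purpose undocumented
}
```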

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Creates an edited or extended image given an original image and a prompt.' It specifies the verb ('creates'), resource ('edited or extended image'), and required inputs. However, it doesn't explicitly differentiate from sibling tools like createImage or createImageVariation, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like createImage (for generation from scratch) or createImageVariation (for creating variations), nor does it specify prerequisites or exclusions. This leaves the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

createImageVariation: C

Creates a variation of a given image.

Parameters (JSON Schema)
- x-hapi-auth-state (optional)
- createImageVariationBody (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool creates variations, implying a generative/mutation operation, but fails to mention critical details like rate limits, authentication needs (implied by 'x-hapi-auth-state' in schema), or what the output entails (e.g., URLs or base64 data). This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (image generation tool with nested parameters), lack of annotations, no output schema, and poor parameter coverage, the description is incomplete. It doesn't address key aspects like output format, authentication, or usage constraints, leaving the agent under-informed for effective tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no meaning beyond the input schema, which has 0% schema description coverage (no descriptions for top-level parameters). It doesn't explain what 'createImageVariationBody' or 'x-hapi-auth-state' represent, leaving all parameters undocumented. Given the low coverage, the description fails to compensate, resulting in poor parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('creates a variation') and resource ('of a given image'), making the purpose immediately understandable. However, it doesn't differentiate this tool from its sibling 'createImageEdit', which might also involve image modification, leaving room for confusion about when to use one versus the other.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'createImage' or 'createImageEdit'. It lacks context about prerequisites, such as needing a square PNG image, or any exclusions, leaving the agent to infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

createModeration: C

Classifies if text violates OpenAI's Content Policy.

Parameters (JSON Schema)
- x-hapi-auth-state (optional)
- createModerationBody (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool classifies text for policy violations but lacks details on behavioral traits such as rate limits, authentication needs, error handling, or what the classification output entails (e.g., categories, confidence scores). This is a significant gap for a tool with potential sensitivity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with zero waste. It's front-loaded and efficiently conveys the core purpose without unnecessary elaboration, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (moderation with policy implications), lack of annotations, no output schema, and low schema coverage, the description is incomplete. It doesn't explain return values, error cases, or operational constraints, leaving gaps that could hinder correct tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It mentions 'text' as the input but doesn't detail parameters like 'model' or 'x-hapi-auth-state'. The description adds minimal value beyond the schema, failing to fully address the coverage gap, warranting a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
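For illustration, a hypothetical createModeration call and the kind of result shape an agent might expect are sketched below. The response fields (flagged, categories, category_scores) mirror the OpenAI moderation API; this tool documents neither its inputs nor its output, so all of this is an assumption.

```python
# Hypothetical createModeration arguments; field names are assumptions
# modeled on the OpenAI moderation API, not this tool's schema.
create_moderation_args = {
    "createModerationBody": {
        "input": "Some user-submitted text to check",
        "model": "text-moderation-latest",  # assumed optional model selector
    },
    "x-hapi-auth-state": None,              # optional; purpose undocumented
}

# Assumed shape of the classification result (types only).
expected_result_shape = {
    "flagged": bool,          # overall verdict
    "categories": dict,       # per-category booleans
    "category_scores": dict,  # per-category confidence scores
}
```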

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Classifies if text violates OpenAI's Content Policy.' It specifies the verb ('classifies') and the resource ('text'), though it doesn't explicitly differentiate from sibling tools like content moderation vs. image/audio/model operations, which is implied by the context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, typical use cases, or how it differs from other moderation or classification tools that might exist in a broader ecosystem.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

createTranscription: C

Transcribes audio into the input language.

Parameters (JSON Schema)
- x-hapi-auth-state (optional)
- createTranscriptionBody (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but offers minimal behavioral disclosure. It mentions the transformation (transcription) but doesn't address permissions, rate limits, costs, error conditions, or what the output looks like. For a tool that processes audio files and returns text, this is inadequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single sentence—and front-loaded with the core purpose. However, this brevity comes at the cost of completeness, making it under-specified rather than efficiently informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (audio processing, multiple parameters with 0% schema coverage, no output schema, and no annotations), the description is severely incomplete. It doesn't explain inputs, outputs, behavior, or context, leaving the agent poorly equipped to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but fails to do so. It doesn't mention any parameters like file, model, or optional settings, leaving all 2 parameters (with nested object) undocumented. The phrase 'input language' vaguely hints at the 'language' parameter but without clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
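A hypothetical createTranscription call is sketched below. The "file" and "model" fields are assumptions based on the OpenAI audio API, and "language" is the field that the phrase "input language" presumably refers to; none of this is documented by the tool itself.

```python
# Hypothetical arguments for createTranscription; field names are
# assumptions modeled on the OpenAI audio API, not the tool schema.
create_transcription_args = {
    "createTranscriptionBody": {
        "file": "meeting.mp3",  # audio file (accepted formats undocumented)
        "model": "whisper-1",   # assumed transcription model name
        "language": "en",       # ISO-639-1 code of the spoken language (assumed)
    },
    "x-hapi-auth-state": None,  # optional; purpose undocumented
}
```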

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Transcribes audio into the input language' clearly states the core function (verb+resource) but is somewhat vague about scope and lacks differentiation from sibling tools like createTranslation. It doesn't specify what 'input language' refers to or mention the audio-to-text transformation explicitly enough.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like createTranslation or other audio processing tools. The description doesn't mention prerequisites, constraints, or typical use cases, leaving the agent to infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

createTranslation: C

Translates audio into English.

Parameters (JSON Schema)
- x-hapi-auth-state (optional)
- createTranslationBody (required)

Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but offers none. It doesn't mention whether this is a read or write operation, authentication requirements, rate limits, what happens to the input file, or what the output format looks like. This is inadequate for a tool that processes audio files.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at just four words, with zero wasted language. It's front-loaded with the core purpose, though this brevity comes at the cost of completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 parameters, nested objects, no annotations, and no output schema, the description is completely inadequate. It doesn't explain what the tool returns, how to use it properly, or any behavioral characteristics. The extreme brevity fails to provide necessary context for effective tool use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description mentions no parameters at all, while the schema shows 2 parameters with 0% description coverage. The description doesn't compensate for this gap: it doesn't mention required audio files, model selection, or any of the optional parameters like prompt, temperature, or response format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Translates audio into English' clearly states the verb (translates) and resource (audio) with a specific output language (English). However, it doesn't differentiate from the sibling tool 'createTranscription', which likely performs a similar audio-to-text function without the translation aspect.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'createTranscription'. There's no mention of prerequisites, limitations, or appropriate contexts for audio translation versus transcription.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deleteModel: B

Delete a fine-tuned model. You must have the Owner role in your organization.

Parameters (JSON Schema)
- model (required)
- x-hapi-auth-state (optional)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the Owner role requirement, which is useful, but lacks details on whether deletion is permanent, reversible, or has side effects, and doesn't describe the response format or error conditions for a destructive operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences with zero waste: the first states the action and resource, and the second specifies a critical prerequisite. It's front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no annotations, 0% schema coverage, and no output schema, the description is incomplete. It lacks details on behavioral traits (e.g., permanence), parameter meanings, and expected outcomes, leaving significant gaps for an AI agent to invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It doesn't explain what 'model' refers to (e.g., model ID, name) or the purpose of 'x-hapi-auth-state', leaving both parameters semantically unclear beyond the schema's basic types.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
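For illustration, a hypothetical deleteModel invocation might look like the following. The identifier format is an assumption (OpenAI fine-tuned model IDs use an "ft:" prefix); the schema itself does not say what kind of identifier the 'model' parameter expects.

```python
# Hypothetical deleteModel arguments; the model ID format is an
# assumption, since the schema leaves the parameter undocumented.
delete_model_args = {
    "model": "ft:gpt-3.5-turbo:my-org::abc123",  # hypothetical fine-tuned model ID
    "x-hapi-auth-state": None,                   # optional; purpose undocumented
}
```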

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and resource ('a fine-tuned model'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'listModels' or 'retrieveModel' beyond the obvious verb difference.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides one usage guideline: 'You must have the Owner role in your organization.' This implies a prerequisite but doesn't specify when to use this tool versus alternatives like 'listModels' or 'retrieveModel', nor does it mention any exclusions or edge cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

listModels: C

Lists the currently available models, and provides basic information about each one such as the owner and availability.

Parameters (JSON Schema)
- x-hapi-auth-state (optional)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but provides minimal behavioral context. It mentions the tool lists models and provides basic info, but doesn't disclose critical traits like whether it's read-only (implied by 'Lists'), authentication requirements (suggested by the parameter), rate limits, pagination, or error handling. This leaves significant gaps for agent understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and output details. It's front-loaded with the main action and resource, with no wasted words, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, one undocumented parameter, and no output schema, the description is incomplete. It adequately states the purpose but lacks usage guidelines, parameter semantics, and behavioral details needed for a tool that likely involves authentication and returns model data. More context is required for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but adds no parameter information. The single parameter 'x-hapi-auth-state' is undocumented in both schema and description, leaving its purpose (likely authentication) unexplained. The description doesn't mention any parameters, failing to address the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Lists') and resource ('currently available models'), specifying what information is provided ('basic information about each one such as the owner and availability'). It distinguishes from siblings like 'retrieveModel' (singular) and 'deleteModel' by focusing on listing all models, though it doesn't explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites, such as authentication implied by the 'x-hapi-auth-state' parameter, or contrast with siblings like 'retrieveModel' for getting details of a specific model. The description only states what it does, not when it's appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

retrieveModel: C

Retrieves a model instance, providing basic information about the model such as the owner and permissioning.

Parameters (JSON Schema)
- model (required)
- x-hapi-auth-state (optional)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves basic information, implying a read-only operation, but doesn't cover critical aspects like authentication needs (e.g., 'x-hapi-auth-state' parameter), error handling, rate limits, or what happens if the model doesn't exist. The description adds minimal context beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, though it could be slightly more structured by separating usage details, but this is minor.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a retrieval tool with 2 parameters, 0% schema coverage, no annotations, and no output schema), the description is incomplete. It lacks details on parameter usage, behavioral traits (e.g., authentication, errors), and output format, making it inadequate for an agent to invoke the tool correctly without additional context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It doesn't explain the 'model' parameter (e.g., what format or identifier is expected) or the optional 'x-hapi-auth-state' parameter (e.g., its purpose or usage). The description adds no meaning beyond what the schema provides, failing to address the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('retrieves') and resource ('model instance'), specifying it provides basic information like owner and permissioning. However, it doesn't explicitly differentiate from sibling tools like 'listModels' (which lists multiple models) or 'deleteModel' (which removes a model), though the distinction is somewhat implied by the verb 'retrieves' versus 'list'/'delete'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites (e.g., needing a specific model ID), exclusions, or comparisons to siblings like 'listModels' for broader queries. Usage is implied by the purpose but not clearly articulated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
