
Server Details

Give your AI assistant access to real Helm chart data. No more hallucinated values.yaml files.

Status: Healthy
Transport: Streamable HTTP
Repository: Kubedoll-Heavy-Industries/helm-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 5 of 5 tools scored. Lowest: 3.4/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_dependencies retrieves chart dependencies, get_notes fetches post-install instructions, get_values obtains configuration values, get_versions lists available versions, and search_charts searches for charts. There is no overlap in functionality, making tool selection unambiguous for an agent.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with 'get_' or 'search_' prefixes, using snake_case uniformly. This predictability enhances readability and usability, with no deviations in naming conventions across the set.

Tool Count: 4/5

With 5 tools, the count is reasonable for a Helm chart management server, covering key operations like dependency retrieval, note viewing, value inspection, version listing, and chart search. It is slightly lean but well-scoped, lacking only minor operations like chart installation or deletion, which might be outside the server's intended scope.

Completeness: 4/5

The toolset provides comprehensive read-only coverage for inspecting Helm charts, including dependencies, notes, values, versions, and search. It supports both HTTP/HTTPS repos and OCI registries, with clear limitations noted for OCI. Minor gaps exist, such as no tools for installing, updating, or deleting charts, but these might be intentional for a focused inspection server, and agents can work around this by using other methods for write operations.

Available Tools

5 tools
get_dependencies: B
Read-only
Inspect

Get chart dependencies. Supports both HTTP/HTTPS repos and OCI registries (oci://).

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| chart_name | Yes | Chart name (e.g. postgresql) |
| chart_version | No | Chart version (defaults to latest) |
| repository_url | Yes | Helm repository URL (e.g. https://charts.bitnami.com/bitnami) or OCI registry (e.g. oci://ghcr.io/traefik/helm) |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| version | Yes | Resolved chart version (especially useful when chart_version was omitted and latest was used) |
| dependencies | Yes | Chart dependencies |
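A call to this tool is an ordinary MCP `tools/call` request. A minimal sketch of the JSON-RPC body an agent might send (the chart and repository are just the examples from the schema; no real server is contacted):

```python
import json

# Build a JSON-RPC 2.0 "tools/call" request for get_dependencies.
# chart_version is omitted, so the server resolves the latest version
# and reports it back in the "version" field of the output.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_dependencies",
        "arguments": {
            "chart_name": "postgresql",
            "repository_url": "https://charts.bitnami.com/bitnami",
        },
    },
}

print(json.dumps(request, indent=2))
```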
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true, indicating this is a safe read operation with potentially unbounded data. The description adds value by specifying supported protocols (HTTP/HTTPS and OCI registries), which provides useful context beyond annotations. However, it doesn't disclose other behavioral traits like rate limits, authentication needs, or what 'dependencies' entail (e.g., format or depth).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—just two sentences that efficiently convey the core purpose and key capability (protocol support). Every word earns its place, with no redundant information or fluff. It's front-loaded with the main action, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, 2 required), rich annotations (readOnlyHint, openWorldHint), 100% schema coverage, and presence of an output schema, the description is reasonably complete. It covers the essential 'what' and protocol support, though it could benefit from more usage guidance. The output schema likely handles return values, reducing the burden on the description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters (repository_url, chart_name, chart_version) with clear descriptions. The description adds marginal value by reinforcing the protocol support mentioned in the repository_url schema, but doesn't provide additional semantic context beyond what's already in the structured fields. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('chart dependencies'), making the purpose immediately understandable. It distinguishes from siblings like get_notes, get_values, and get_versions by focusing specifically on dependencies rather than other chart metadata. However, it doesn't explicitly contrast with search_charts, which might also involve dependencies indirectly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like get_versions or search_charts. It mentions supported protocols (HTTP/HTTPS and OCI), but this is more about capability than usage context. There are no explicit when/when-not statements or prerequisites for selecting this tool over siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_notes: A
Read-only
Inspect

Get chart NOTES.txt (post-install instructions). Supports both HTTP/HTTPS repos and OCI registries (oci://).

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| chart_name | Yes | Chart name (e.g. postgresql) |
| chart_version | No | Chart version (defaults to latest) |
| repository_url | Yes | Helm repository URL (e.g. https://charts.bitnami.com/bitnami) or OCI registry (e.g. oci://ghcr.io/traefik/helm) |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| notes | Yes | Contents of NOTES.txt |
| version | Yes | Resolved chart version (especially useful when chart_version was omitted and latest was used) |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true, indicating a safe, read-only operation with open-world assumptions. The description adds useful context about supported repository types (HTTP/HTTPS and OCI registries), but does not disclose additional behavioral traits such as rate limits, authentication needs, or error handling. There is no contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of just two sentences that efficiently convey the tool's purpose and key usage context. Every sentence earns its place without redundancy or unnecessary elaboration, making it easy for an AI agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, 2 required), rich annotations (readOnlyHint, openWorldHint), and the presence of an output schema, the description is mostly complete. It covers the core purpose and repository support, but could benefit from more behavioral context (e.g., error cases or output format hints) to fully compensate for the lack of explicit guidelines on when to use versus siblings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters well-documented in the input schema. The description adds minimal value beyond the schema by implying repository_url supports both HTTP/HTTPS and OCI formats, but does not provide additional syntax, format details, or meaning for parameters like chart_name or chart_version. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get chart NOTES.txt') and resource ('post-install instructions'), distinguishing it from sibling tools like get_dependencies, get_values, get_versions, and search_charts. It specifies the exact file being retrieved (NOTES.txt) and its purpose (post-install instructions), avoiding vagueness or tautology.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by stating it 'Supports both HTTP/HTTPS repos and OCI registries (oci://)', which helps guide usage based on repository type. However, it does not explicitly mention when to use this tool versus alternatives like get_dependencies or get_values, nor does it provide exclusions or prerequisites, leaving some room for improvement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_values: A
Read-only
Inspect

Get chart values.yaml with optional JSON schema. Uses depth limiting (default 2) to show structure without overwhelming context. Use path to drill into specific sections, depth=0 for full YAML. Supports both HTTP/HTTPS repos and OCI registries (oci://).

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| path | No | YAML path (e.g. .ingress.enabled) |
| depth | No | Max nesting depth (default 2, 0 for unlimited) |
| chart_name | Yes | Chart name (e.g. postgresql) |
| chart_version | No | Chart version (defaults to latest) |
| show_comments | No | Preserve YAML comments |
| show_defaults | No | Include default values |
| include_schema | No | Include values.schema.json in response |
| repository_url | Yes | Helm repository URL (e.g. https://charts.bitnami.com/bitnami) or OCI registry (e.g. oci://ghcr.io/traefik/helm) |
| max_array_items | No | Max array items before truncation (default 3, 0 for unlimited) |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| path | No | Extracted path, if specified |
| schema | No | JSON Schema for values (if include_schema=true and schema exists) |
| values | Yes | Values content (YAML) |
| version | Yes | Resolved chart version (especially useful when chart_version was omitted and latest was used) |
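The depth/path workflow the description hints at can be sketched as two argument sets: a shallow structural scan first, then a full drill-down into one section. The `.primary.persistence` path is a hypothetical example, not taken from this page:

```python
# First call: shallow scan of the whole values.yaml. Omitting "depth"
# uses the default of 2, which shows top-level structure without
# flooding the agent's context.
scan_args = {
    "chart_name": "postgresql",
    "repository_url": "https://charts.bitnami.com/bitnami",
}

# Second call: full, untruncated YAML for one subtree found in the scan.
drill_args = {
    "chart_name": "postgresql",
    "repository_url": "https://charts.bitnami.com/bitnami",
    "path": ".primary.persistence",  # drill into a specific section
    "depth": 0,                      # 0 = unlimited nesting
    "max_array_items": 0,            # 0 = no array truncation
}
```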
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true and openWorldHint=true, which the description does not contradict. The description adds valuable behavioral context beyond annotations: it explains depth limiting with a default of 2, supports HTTP/HTTPS repos and OCI registries, and mentions optional JSON schema inclusion. This provides practical usage insights that annotations alone do not cover, though it could mention rate limits or authentication needs for a higher score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it starts with the core purpose, then explains key behavioral traits (depth limiting, path usage, protocol support) in two efficient sentences. Every sentence adds value without redundancy, making it concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, read-only operation), the description is complete enough. It covers the main purpose, key usage guidelines, and behavioral traits. With annotations providing safety hints and an output schema existing (so return values need not be explained), the description fills in necessary context without overloading, making it sufficient for an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly. The description adds some semantic context: it mentions depth limiting (default 2) and path usage, which aligns with schema details but doesn't provide significant additional meaning. With high schema coverage, the baseline is 3, as the description compensates minimally beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get chart values.yaml with optional JSON schema.' It specifies the exact resource (chart values.yaml) and verb (get), and distinguishes it from siblings like get_dependencies, get_notes, get_versions, and search_charts by focusing on values.yaml retrieval rather than dependencies, post-install notes, versions, or chart searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage: 'Uses depth limiting (default 2) to show structure without overwhelming context. Use path to drill into specific sections, depth=0 for full YAML.' It explains when to adjust depth and path for different use cases. However, it does not explicitly state when to use this tool versus its siblings or any exclusions, which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_versions: A
Read-only
Inspect

Get available versions of a chart (newest first). Supports both HTTP/HTTPS repos and OCI registries (oci://).

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| limit | No | Maximum results (default 20, max 100) |
| chart_name | Yes | Chart name (e.g. postgresql) |
| repository_url | Yes | Helm repository URL (e.g. https://charts.bitnami.com/bitnami) or OCI registry (e.g. oci://ghcr.io/traefik/helm) |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| total | Yes | Total versions available (may exceed returned results if limit applied) |
| versions | Yes | Chart versions (newest first) |
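Since `total` can exceed the number of versions returned, an agent should check for truncation before assuming it has the full list. A small sketch with made-up data:

```python
# Hypothetical result illustrating the output schema: limit=3 was
# applied, so only the three newest versions are returned while
# "total" reports how many exist overall.
result = {
    "total": 87,
    "versions": ["16.2.1", "16.2.0", "16.1.2"],  # newest first
}

truncated = result["total"] > len(result["versions"])
print(truncated)  # True
```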
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and openWorldHint=true, covering safety and scope. The description adds valuable context beyond this by specifying the sorting order ('newest first') and supported protocols, which are not captured in annotations. It does not contradict annotations and provides useful behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and efficiently includes essential details in two sentences without redundancy. Every sentence adds value, such as the sorting order and protocol support, making it appropriately sized and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations, and the presence of an output schema, the description is largely complete. It covers purpose, sorting, and protocol support, though it could benefit from more explicit usage guidelines. The output schema reduces the need to explain return values, making this adequate but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the three parameters. The description adds minimal value by mentioning protocol support (e.g., 'oci://'), which relates to repository_url but doesn't provide additional syntax or format details beyond the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get available versions'), resource ('of a chart'), and scope ('newest first'), distinguishing it from sibling tools like get_dependencies, get_notes, get_values, and search_charts. It precisely defines what the tool does without being vague or tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying supported protocols ('HTTP/HTTPS repos and OCI registries'), but it does not provide explicit guidance on when to use this tool versus alternatives like search_charts. There is no mention of prerequisites, exclusions, or comparative scenarios, leaving some ambiguity for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_charts: A
Read-only
Inspect

Search for charts in a Helm repository. Note: OCI registries (oci://) do not support browsing; use get_values or get_versions with a specific chart name instead.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| limit | No | Maximum results (default 50, max 200) |
| search | No | Filter by name (case-insensitive substring) |
| repository_url | Yes | Helm repository URL (e.g. https://charts.bitnami.com/bitnami) or OCI registry (e.g. oci://ghcr.io/traefik/helm) |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| total | Yes | Total matching charts (may exceed returned results if limit applied) |
| charts | Yes | Chart names |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only and open-world hints, which the description doesn't contradict. It adds valuable behavioral context by disclosing that OCI registries don't support browsing, a constraint not covered by annotations, enhancing the agent's understanding of limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first states the purpose, and the second provides critical usage guidance. It's front-loaded and efficiently structured, earning its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (search with parameters), rich annotations (readOnlyHint, openWorldHint), and the presence of an output schema, the description is complete enough. It covers purpose, key constraints (OCI limitation), and alternatives, leaving output details to the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents parameters like limit, search, and repository_url. The description adds no extra parameter details beyond the schema, but it implies usage context (e.g., OCI vs. non-OCI URLs), which provides some marginal semantic value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'search' and resource 'charts in a Helm repository', making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like get_dependencies or get_notes, which might also involve chart-related operations, so it misses full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when not to use this tool (for OCI registries) and names specific alternatives (get_values or get_versions), offering clear context for tool selection versus siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
