get_prompt_template

Retrieve the canonical English prompt template or specific sections for the MCP-Ambari-API server, enabling structured access to Hadoop cluster management guidance.

Instructions

Return the canonical English prompt template (optionally a specific section).

Per a project simplification decision, only a single English template file, PROMPT_TEMPLATE.md, is maintained.

Args:
  • section: (optional) section number or keyword, case-insensitive, e.g. "1", "purpose", "tool map".
  • mode: (optional) if "headings", returns just the list of section headings with numeric indices.
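For illustration, here are hypothetical argument payloads an MCP client might send for this tool. The shapes are inferred from the Args description above, not taken from the server's own tests:

```python
# Hypothetical tools/call argument payloads for get_prompt_template,
# inferred from the Args description above.
full_template = {}                      # no args: return the entire template
by_number = {"section": "1"}            # select a section by numeric index
by_keyword = {"section": "Tool Map"}    # keyword match is case-insensitive
headings_only = {"mode": "headings"}    # list section headings with indices
```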

Input Schema

Name     Required  Description  Default
section  No
mode     No

Output Schema

Name     Required  Description  Default
result   Yes

Implementation Reference

  • The tool named 'get_prompt_template' appears in the package's __all__ list, indicating it is registered and exported for use by external code and tests. The import from .mcp_main suggests the actual implementation lives in that module, though it could not be resolved here.
    __all__ = [
    	# tools (selective explicit export for tests / external use)
    	'get_prompt_template'
    ]
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns a template from a specific file ('PROMPT_TEMPLATE.md') and describes optional behaviors for section and mode parameters. However, it doesn't cover important behavioral traits like error handling, response format details (though an output schema exists), or any rate limits or permissions needed. The description adds some context but leaves gaps in behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by a brief project context, and then details the parameters in a clear 'Args:' section. Every sentence earns its place by adding value, with no redundant or verbose language. The structure is logical and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 optional parameters) and the presence of an output schema (which handles return values), the description is fairly complete. It covers the purpose, project context, and parameter semantics adequately. However, it lacks details on behavioral aspects like error cases or performance, which would be beneficial despite the output schema. Overall, it's sufficient but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that 'section' can be a number or keyword (e.g., '1', 'purpose', 'tool map') and is case-insensitive, and that 'mode' with value 'headings' returns section headings with indices. This clarifies the semantics and usage of both parameters, compensating well for the lack of schema descriptions. Since there are only 2 parameters and the description covers them effectively, a score of 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return the canonical English prompt template (optionally a specific section).' It specifies the verb ('Return') and resource ('canonical English prompt template'), and mentions the optional section parameter. However, it doesn't explicitly differentiate this tool from its siblings (which are mostly about system monitoring and management), though the purpose is distinct enough to imply differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance by explaining that it returns a template, optionally filtered by section or mode. It mentions the simplified project decision about a single template file, which gives context. However, it lacks explicit guidance on when to use this tool versus alternatives (e.g., compared to other 'get_' tools), and doesn't specify prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
