get_prompt

Retrieve project-specific prompts for testing best practices and code analysis to enhance development workflows.

Instructions

Get a prompt designed for this codebase. The prompts include:

  • test_guide.md: Guide for testing best practices in this library

  • code_analysis: Analyze code quality

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt_name | Yes | The name of the prompt to retrieve | (none) |
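Given that schema, a call to this tool passes an arguments object whose only (required) field is `prompt_name`, constrained by the enum to the configured prompt names. A small sketch of that payload shape, inferred from the schema rather than taken from the server's own docs:

```python
# Example arguments payload for a get_prompt call. "test_guide.md" is one of
# the prompt names listed in the description; the schema's enum restricts
# prompt_name to the configured values.
arguments = {"prompt_name": "test_guide.md"}

# The handler reads the argument with .get(), as in the implementation
# reference on this page:
prompt_name = arguments.get("prompt_name")
```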

Implementation Reference

  • Executes the 'get_prompt' tool: validates input, enforces config filter, finds prompt in config, retrieves content via helper, returns as TextContent.
    if name == "get_prompt":
        prompt_name = arguments.get("prompt_name")
        if not prompt_name:
            raise ExecutionError(
                "HooksMCP Error: 'prompt_name' argument is required for get_prompt tool"
            )
    
        # Enforce get_prompt_tool_filter if present
        if hooks_mcp_config.get_prompt_tool_filter is not None:
            # If filter is empty, don't allow any prompts (shouldn't happen since tool isn't exposed)
            if not hooks_mcp_config.get_prompt_tool_filter:
                raise ExecutionError(
                    "HooksMCP Error: No prompts are available through get_prompt tool"
                )
            # Otherwise, check if prompt is in the filter list
            if prompt_name not in hooks_mcp_config.get_prompt_tool_filter:
                available_prompts = ", ".join(
                    hooks_mcp_config.get_prompt_tool_filter
                )
                raise ExecutionError(
                    f"HooksMCP Error: Prompt '{prompt_name}' is not available through get_prompt tool. "
                    f"Available prompts: {available_prompts}"
                )
    
        # Find the prompt by name
        config_prompt = next(
            (p for p in hooks_mcp_config.prompts if p.name == prompt_name), None
        )
        if not config_prompt:
            raise ExecutionError(
                f"HooksMCP Error: Prompt '{prompt_name}' not found"
            )
    
        # Get prompt content
        prompt_content = get_prompt_content(config_prompt, config_path)
    
        # Return the prompt content as text
        return [TextContent(type="text", text=prompt_content)]
  • Creates and registers the 'get_prompt' Tool object with dynamic schema (prompt_name enum from filtered config prompts) in the tools list.
    get_prompt_tool = Tool(
        name="get_prompt",
        description=tool_description,
        inputSchema={
            "type": "object",
            "properties": {
                "prompt_name": {
                    "type": "string",
                    "description": "The name of the prompt to retrieve",
                    "enum": prompt_names,
                }
            },
            "required": ["prompt_name"],
        },
    )
    tools.append(get_prompt_tool)
  • Helper function to load prompt content from inline text or file path relative to config, used by both tool handler and MCP prompt handler.
    def get_prompt_content(config_prompt: ConfigPrompt, config_path: Path) -> str:
        """
        Get the content of a prompt from either the inline text or file.
    
        Args:
            config_prompt: The prompt configuration
            config_path: Path to the configuration file (used for resolving relative paths)
    
        Returns:
            The prompt content as a string
        """
        if config_prompt.prompt_text:
            return config_prompt.prompt_text
        elif config_prompt.prompt_file:
            prompt_file_path = config_path.parent / config_prompt.prompt_file
        try:
            return prompt_file_path.read_text(encoding="utf-8")
        except Exception as e:
            # Chain the original exception so the underlying I/O error is preserved
            raise ExecutionError(
                f"HooksMCP Error: Failed to read prompt file '{config_prompt.prompt_file}': {e}"
            ) from e
        else:
            raise ExecutionError(
                f"HooksMCP Error: Prompt '{config_prompt.name}' has no content"
            )
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description states that it 'gets' a prompt, implying a read operation, but doesn't specify whether this requires authentication, has rate limits, returns structured data or raw text, or what happens with invalid prompt names. For a tool with zero annotation coverage, this is inadequate behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences. The first sentence states the core purpose, and the second provides specific examples in a bullet-like format. There's no wasted text, though the structure could be slightly improved by integrating the examples more smoothly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It doesn't explain what format the prompts are returned in (markdown text? structured data?), whether there are additional prompts beyond the two listed, or how this tool fits within the codebase context alongside sibling testing/analysis tools. For a tool in a development environment with multiple sibling tools, more contextual information would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with a single enum parameter clearly documented. The description lists the two available prompts ('test_guide.md' and 'code_analysis'), which aligns with the enum values but doesn't add meaningful semantic context beyond what the schema already provides. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get a prompt designed for this codebase' with specific examples of what prompts are available. It uses a specific verb ('Get') and resource ('prompt'), but doesn't explicitly differentiate from sibling tools like 'all_tests' or 'check_format' which appear to be testing/analysis tools rather than prompt retrieval tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it lists the available prompts, it doesn't explain when to retrieve a prompt rather than invoke a sibling tool such as 'test_file' directly. There's no mention of prerequisites, timing considerations, or alternative approaches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
