
get_prompt

Retrieve project-specific prompts for testing best practices and code analysis to enhance development workflows.

Instructions

Get a prompt designed for this codebase. The prompts include:

  • test_guide.md: Guide for testing best practices in this library

  • code_analysis: Analyze code quality

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt_name | Yes | The name of the prompt to retrieve | |
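Under this schema, a well-formed call supplies only `prompt_name`, and its value must be one of the prompt names listed above. A minimal client-side validation sketch (the schema dict mirrors the one this tool declares; the enum values are this server's two prompts):

```python
# Sketch: validate get_prompt arguments against the input schema by hand.
schema = {
    "type": "object",
    "properties": {
        "prompt_name": {
            "type": "string",
            "description": "The name of the prompt to retrieve",
            "enum": ["test_guide.md", "code_analysis"],
        }
    },
    "required": ["prompt_name"],
}

def validate_arguments(arguments: dict) -> list[str]:
    """Return a list of validation errors (empty means valid)."""
    errors = []
    for key in schema["required"]:
        if key not in arguments:
            errors.append(f"missing required argument: {key}")
    allowed = schema["properties"]["prompt_name"]["enum"]
    name = arguments.get("prompt_name")
    if name is not None and name not in allowed:
        errors.append(f"prompt_name must be one of: {', '.join(allowed)}")
    return errors

print(validate_arguments({"prompt_name": "test_guide.md"}))  # []
print(validate_arguments({}))  # ['missing required argument: prompt_name']
```

A production client would normally let the MCP host validate against the schema directly; this just makes the `required` and `enum` constraints concrete.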

Implementation Reference

  • Executes the 'get_prompt' tool: validates the input, enforces the configured filter, looks up the prompt in the config, retrieves its content via a helper, and returns it as TextContent.

```python
if name == "get_prompt":
    prompt_name = arguments.get("prompt_name")
    if not prompt_name:
        raise ExecutionError(
            "HooksMCP Error: 'prompt_name' argument is required for get_prompt tool"
        )
    # Enforce get_prompt_tool_filter if present
    if hooks_mcp_config.get_prompt_tool_filter is not None:
        # If filter is empty, don't allow any prompts (shouldn't happen since tool isn't exposed)
        if not hooks_mcp_config.get_prompt_tool_filter:
            raise ExecutionError(
                "HooksMCP Error: No prompts are available through get_prompt tool"
            )
        # Otherwise, check if prompt is in the filter list
        if prompt_name not in hooks_mcp_config.get_prompt_tool_filter:
            available_prompts = ", ".join(
                hooks_mcp_config.get_prompt_tool_filter
            )
            raise ExecutionError(
                f"HooksMCP Error: Prompt '{prompt_name}' is not available through get_prompt tool. "
                f"Available prompts: {available_prompts}"
            )
    # Find the prompt by name
    config_prompt = next(
        (p for p in hooks_mcp_config.prompts if p.name == prompt_name), None
    )
    if not config_prompt:
        raise ExecutionError(
            f"HooksMCP Error: Prompt '{prompt_name}' not found"
        )
    # Get prompt content
    prompt_content = get_prompt_content(config_prompt, config_path)
    # Return the prompt content as text
    return [TextContent(type="text", text=prompt_content)]
```
  • Creates and registers the 'get_prompt' Tool object in the tools list. The input schema is built dynamically: prompt_name is a required string whose enum lists the (filtered) prompt names from the config.

```python
get_prompt_tool = Tool(
    name="get_prompt",
    description=tool_description,
    inputSchema={
        "type": "object",
        "properties": {
            "prompt_name": {
                "type": "string",
                "description": "The name of the prompt to retrieve",
                "enum": prompt_names,
            }
        },
        "required": ["prompt_name"],
    },
)
tools.append(get_prompt_tool)
```
  • Helper function to load prompt content from inline text or a file path relative to the config; used by both the tool handler and the MCP prompt handler.

```python
def get_prompt_content(config_prompt: ConfigPrompt, config_path: Path) -> str:
    """
    Get the content of a prompt from either the inline text or file.

    Args:
        config_prompt: The prompt configuration
        config_path: Path to the configuration file (used for resolving relative paths)

    Returns:
        The prompt content as a string
    """
    if config_prompt.prompt_text:
        return config_prompt.prompt_text
    elif config_prompt.prompt_file:
        prompt_file_path = config_path.parent / config_prompt.prompt_file
        try:
            return prompt_file_path.read_text(encoding="utf-8")
        except Exception as e:
            raise ExecutionError(
                f"HooksMCP Error: Failed to read prompt file '{config_prompt.prompt_file}': {str(e)}"
            )
    else:
        raise ExecutionError(
            f"HooksMCP Error: Prompt '{config_prompt.name}' has no content"
        )
```
