
get_prompt

Retrieve rendered prompt messages by name with arguments from an MCP server for testing and validation purposes.

Instructions

Get a rendered prompt from the connected MCP server.

Retrieves a prompt by name with the provided arguments and returns the rendered prompt messages.

Returns: Dictionary with the rendered prompt, including:

  • success: True if the prompt was retrieved successfully
  • prompt: Object with name, description, and rendered messages
  • metadata: Request timing and server information

Raises: Returns an error dict for the various failure scenarios:

  • not_connected: No active connection
  • prompt_not_found: Prompt doesn't exist on the server
  • invalid_arguments: Arguments don't match the prompt schema
  • execution_error: Prompt retrieval failed
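As a sketch, the two response shapes described above might look like the following. Only the field names come from the tool's description; the prompt name, server URL, and all values here are hypothetical examples.

```python
# Illustrative response shapes; all values are hypothetical.

# Success: prompt retrieved and rendered.
success_response = {
    "success": True,
    "prompt": {
        "name": "summarize",  # hypothetical prompt name
        "description": "Summarize a document",
        "messages": [
            {"role": "user", "content": {"type": "text", "text": "Summarize: ..."}},
        ],
    },
    "metadata": {
        "request_time_ms": 12.34,
        "server_url": "http://localhost:8000/mcp",  # hypothetical
        "connection_statistics": {"prompts_executed": 1, "errors": 0},
    },
}

# Failure: the tool returns an error dict rather than raising.
error_response = {
    "success": False,
    "error": {
        "error_type": "prompt_not_found",
        "message": "Failed to get prompt 'summarize': unknown prompt",
        "details": {"prompt_name": "summarize", "arguments": {}},
        "suggestion": "Use list_prompts() to see available prompts",
    },
    "prompt": None,
    "metadata": {"request_time_ms": 5.67},
}
```

Note that failures are reported in-band via `success: False` rather than as exceptions, so callers should branch on `success` before reading `prompt`.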

Input Schema

| Name      | Required | Description                                   | Default |
| --------- | -------- | --------------------------------------------- | ------- |
| name      | Yes      | Name of the prompt to retrieve                |         |
| arguments | Yes      | Dictionary of arguments to pass to the prompt |         |
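For example, a call to this tool might carry a payload like the one below. The prompt name "code_review" and its argument keys are hypothetical; each MCP server defines its own prompts and argument schemas.

```python
# Hypothetical input payload for the get_prompt tool; the prompt name
# and argument keys are illustrative, not fixed values.
payload = {
    "name": "code_review",
    "arguments": {
        "language": "python",
        "focus": "error handling",
    },
}
```

Both fields are required; `arguments` must be a dict even when the prompt takes no parameters (pass `{}`).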

Implementation Reference

  • The core handler for the 'get_prompt' MCP tool, registered via the @mcp.tool decorator. It verifies the connection, calls the server's get_prompt, normalizes messages across the supported content types, handles connection, not-found, and invalid-argument errors, returns structured responses with metadata, and emits both logger output and ctx updates.
```python
@mcp.tool
async def get_prompt(
    name: Annotated[str, "Name of the prompt to retrieve"],
    arguments: Annotated[dict[str, Any], "Dictionary of arguments to pass to the prompt"],
    ctx: Context,
) -> dict[str, Any]:
    """Get a rendered prompt from the connected MCP server.

    Retrieves a prompt by name with the provided arguments and returns
    the rendered prompt messages.

    Returns:
        Dictionary with rendered prompt including:
        - success: True if prompt was retrieved successfully
        - prompt: Object with name, description, and rendered messages
        - metadata: Request timing and server information

    Raises:
        Returns error dict for various failure scenarios:
        - not_connected: No active connection
        - prompt_not_found: Prompt doesn't exist on server
        - invalid_arguments: Arguments don't match prompt schema
        - execution_error: Prompt retrieval failed
    """
    start_time = time.perf_counter()

    try:
        # Verify connection exists
        client, state = ConnectionManager.require_connection()

        # User-facing progress update
        await ctx.info(f"Getting prompt '{name}' with arguments")

        # Detailed technical log
        logger.info(
            f"Getting prompt '{name}' with arguments",
            extra={"prompt_name": name, "arguments": arguments},
        )

        # Get the prompt
        prompt_start = time.perf_counter()
        result = await client.get_prompt(name, arguments)
        prompt_elapsed_ms = (time.perf_counter() - prompt_start) * 1000

        # Increment statistics
        ConnectionManager.increment_stat("prompts_executed")

        total_elapsed_ms = (time.perf_counter() - start_time) * 1000

        # Extract prompt messages
        messages = []
        if hasattr(result, "messages") and result.messages:
            for message in result.messages:
                message_dict: dict[str, Any] = {
                    "role": message.role,
                }

                # Handle different content types
                # Content can be: TextContent, ImageContent, AudioContent,
                # ResourceLink, EmbeddedResource
                if hasattr(message, "content"):
                    content = message.content
                    if hasattr(content, "type"):
                        # Structured content with type discriminator
                        content_dict: dict[str, Any] = {
                            "type": content.type,
                        }

                        # Handle type-specific fields
                        if content.type == "text" and hasattr(content, "text"):
                            content_dict["text"] = content.text
                        elif content.type == "image" and hasattr(content, "data"):
                            content_dict["data"] = content.data
                            if hasattr(content, "mimeType"):
                                content_dict["mimeType"] = content.mimeType
                        elif content.type == "audio" and hasattr(content, "data"):
                            content_dict["data"] = content.data
                            if hasattr(content, "mimeType"):
                                content_dict["mimeType"] = content.mimeType
                        elif content.type == "resource":
                            # ResourceLink or EmbeddedResource
                            if hasattr(content, "uri"):
                                content_dict["uri"] = content.uri
                            if hasattr(content, "resource"):
                                content_dict["resource"] = content.resource

                        message_dict["content"] = content_dict
                    else:
                        # Fallback for simple/unknown content types
                        message_dict["content"] = {"type": "text", "text": str(content)}

                messages.append(message_dict)

        prompt_info = {
            "name": name,
            "description": result.description
            if hasattr(result, "description") and result.description
            else "",
            "messages": messages,
        }

        # User-facing success update
        await ctx.info(
            f"Prompt '{name}' retrieved successfully with {len(messages)} messages"
        )

        # Detailed technical log
        logger.info(
            f"Prompt '{name}' retrieved successfully",
            extra={
                "prompt_name": name,
                "message_count": len(messages),
                "duration_ms": prompt_elapsed_ms,
            },
        )

        return {
            "success": True,
            "prompt": prompt_info,
            "metadata": {
                "request_time_ms": round(total_elapsed_ms, 2),
                "server_url": state.server_url,
                "connection_statistics": state.statistics,
            },
        }

    except ConnectionError as e:
        elapsed_ms = (time.perf_counter() - start_time) * 1000

        # User-facing error update
        await ctx.error(f"Not connected when getting prompt '{name}': {str(e)}")

        # Detailed technical log
        logger.error(
            f"Not connected when getting prompt '{name}': {str(e)}",
            extra={"prompt_name": name, "duration_ms": elapsed_ms},
        )

        return {
            "success": False,
            "error": {
                "error_type": "not_connected",
                "message": str(e),
                "details": {"prompt_name": name},
                "suggestion": "Use connect_to_server() to establish a connection first",
            },
            "prompt": None,
            "metadata": {
                "request_time_ms": round(elapsed_ms, 2),
            },
        }

    except Exception as e:
        elapsed_ms = (time.perf_counter() - start_time) * 1000

        # Determine error type based on exception message
        error_type = "execution_error"
        suggestion = "Check the prompt name and arguments, then retry"

        error_msg = str(e).lower()
        if (
            "not found" in error_msg
            or "unknown prompt" in error_msg
            or "no prompt" in error_msg
        ):
            error_type = "prompt_not_found"
            suggestion = (
                f"Prompt '{name}' does not exist on the server. "
                "Use list_prompts() to see available prompts"
            )
        elif (
            "argument" in error_msg
            or "parameter" in error_msg
            or "validation" in error_msg
            or "required" in error_msg
        ):
            error_type = "invalid_arguments"
            suggestion = (
                "Arguments do not match the prompt schema. "
                f"Use list_prompts() to see the correct schema for '{name}'"
            )

        # User-facing error update
        await ctx.error(f"Failed to get prompt '{name}': {str(e)}")

        # Detailed technical log
        logger.error(
            f"Failed to get prompt '{name}': {str(e)}",
            extra={
                "prompt_name": name,
                "arguments": arguments,
                "error_type": error_type,
                "duration_ms": elapsed_ms,
            },
        )

        # Increment error counter
        ConnectionManager.increment_stat("errors")

        return {
            "success": False,
            "error": {
                "error_type": error_type,
                "message": f"Failed to get prompt '{name}': {str(e)}",
                "details": {
                    "prompt_name": name,
                    "arguments": arguments,
                    "exception_type": type(e).__name__,
                },
                "suggestion": suggestion,
            },
            "prompt": None,
            "metadata": {
                "request_time_ms": round(elapsed_ms, 2),
            },
        }
```
  • Input schema defined via Annotated types in the function signature, specifying 'name' as a string and 'arguments' as a dict; the Context parameter is supplied by the framework rather than the caller.
```python
    name: Annotated[str, "Name of the prompt to retrieve"],
    arguments: Annotated[dict[str, Any], "Dictionary of arguments to pass to the prompt"],
    ctx: Context,
) -> dict[str, Any]:
```
  • The @mcp.tool decorator registers the get_prompt function as an MCP tool.
    @mcp.tool
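The handler's error classification is a string-matching heuristic over the exception message (connection failures are caught separately as ConnectionError). That heuristic can be sketched as a standalone function; the name `classify_error` is ours, not part of the source:

```python
def classify_error(exc: Exception) -> str:
    """Map an exception message to one of the handler's error types.

    Mirrors the heuristic in get_prompt's generic except block:
    substring checks on the lowercased message, with "prompt_not_found"
    checked before "invalid_arguments", falling back to "execution_error".
    """
    msg = str(exc).lower()
    if "not found" in msg or "unknown prompt" in msg or "no prompt" in msg:
        return "prompt_not_found"
    if (
        "argument" in msg
        or "parameter" in msg
        or "validation" in msg
        or "required" in msg
    ):
        return "invalid_arguments"
    return "execution_error"
```

Because the checks are ordered, a message containing both "unknown prompt" and "argument" classifies as prompt_not_found; anything that matches neither family of keywords falls through to execution_error.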

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/rdwj/mcp-test-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.