get_prompt

Retrieve rendered prompt messages from the MCP Test MCP server by specifying a prompt name and required arguments.

Instructions

Get a rendered prompt from the connected MCP server.

Retrieves a prompt by name with the provided arguments and returns the rendered prompt messages.

Returns:

Dictionary with the rendered prompt, including:

- success: True if the prompt was retrieved successfully
- prompt: Object with name, description, and rendered messages
- metadata: Request timing and server information

Raises:

Returns an error dict for these failure scenarios:

- not_connected: No active connection
- prompt_not_found: Prompt doesn't exist on the server
- invalid_arguments: Arguments don't match the prompt schema
- execution_error: Prompt retrieval failed
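For illustration, the success and error shapes described above can be sketched as plain dictionaries. All field values here are hypothetical examples, not output from a real server:

```python
# Hypothetical success response; the prompt name, message text, and
# server URL are illustrative assumptions.
success_response = {
    "success": True,
    "prompt": {
        "name": "summarize",  # assumed prompt name
        "description": "Summarize the given text",
        "messages": [
            {"role": "user", "content": {"type": "text", "text": "Summarize: ..."}},
        ],
    },
    "metadata": {
        "request_time_ms": 12.34,
        "server_url": "http://localhost:8000/mcp",  # assumed URL
        "connection_statistics": {"prompts_executed": 1},
    },
}

# Hypothetical error response for the prompt_not_found case.
error_response = {
    "success": False,
    "error": {
        "error_type": "prompt_not_found",
        "message": "Failed to get prompt 'summarize': not found",
        "details": {"prompt_name": "summarize"},
        "suggestion": "Use list_prompts() to see available prompts",
    },
    "prompt": None,
    "metadata": {"request_time_ms": 5.67},
}
```

Note that errors are returned as values rather than raised, so callers should branch on the `success` field.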

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| name | Yes | Name of the prompt to retrieve | |
| arguments | Yes | Dictionary of arguments to pass to the prompt | |
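A call to this tool supplies both fields; for example (the prompt name and argument keys below are hypothetical):

```python
# Hypothetical input payload matching the schema above.
payload = {
    "name": "code_review",  # assumed prompt name
    "arguments": {"language": "python", "focus": "security"},  # assumed keys
}
```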

Implementation Reference

  • The core handler for the 'get_prompt' tool, decorated with @mcp.tool for automatic registration. It verifies that a connection exists, calls the connected client's get_prompt, processes the result messages (handling the various content types), increments the prompts_executed statistic, reports progress via ctx.info/ctx.error, and returns a detailed success or error response with metadata.
    @mcp.tool
    async def get_prompt(
        name: Annotated[str, "Name of the prompt to retrieve"],
        arguments: Annotated[dict[str, Any], "Dictionary of arguments to pass to the prompt"],
        ctx: Context
    ) -> dict[str, Any]:
        """Get a rendered prompt from the connected MCP server.
    
        Retrieves a prompt by name with the provided arguments and returns the
        rendered prompt messages.
    
        Returns:
            Dictionary with rendered prompt including:
            - success: True if prompt was retrieved successfully
            - prompt: Object with name, description, and rendered messages
            - metadata: Request timing and server information
    
        Raises:
            Returns error dict for various failure scenarios:
            - not_connected: No active connection
            - prompt_not_found: Prompt doesn't exist on server
            - invalid_arguments: Arguments don't match prompt schema
            - execution_error: Prompt retrieval failed
        """
        start_time = time.perf_counter()
    
        try:
            # Verify connection exists
            client, state = ConnectionManager.require_connection()
    
            # User-facing progress update
            await ctx.info(f"Getting prompt '{name}' with arguments")
            # Detailed technical log
            logger.info(
                f"Getting prompt '{name}' with arguments",
                extra={"prompt_name": name, "arguments": arguments},
            )
    
            # Get the prompt
            prompt_start = time.perf_counter()
            result = await client.get_prompt(name, arguments)
            prompt_elapsed_ms = (time.perf_counter() - prompt_start) * 1000
    
            # Increment statistics
            ConnectionManager.increment_stat("prompts_executed")
    
            total_elapsed_ms = (time.perf_counter() - start_time) * 1000
    
            # Extract prompt messages
            messages = []
            if hasattr(result, "messages") and result.messages:
                for message in result.messages:
                    message_dict: dict[str, Any] = {
                        "role": message.role,
                    }
                    # Handle different content types
                    # Content can be: TextContent, ImageContent, AudioContent, ResourceLink, EmbeddedResource
                    if hasattr(message, "content"):
                        content = message.content
                        if hasattr(content, "type"):
                            # Structured content with type discriminator
                            content_dict: dict[str, Any] = {
                                "type": content.type,
                            }
                            # Handle type-specific fields
                            if content.type == "text" and hasattr(content, "text"):
                                content_dict["text"] = content.text
                            elif content.type == "image" and hasattr(content, "data"):
                                content_dict["data"] = content.data
                                if hasattr(content, "mimeType"):
                                    content_dict["mimeType"] = content.mimeType
                            elif content.type == "audio" and hasattr(content, "data"):
                                content_dict["data"] = content.data
                                if hasattr(content, "mimeType"):
                                    content_dict["mimeType"] = content.mimeType
                            elif content.type == "resource":
                                # ResourceLink or EmbeddedResource
                                if hasattr(content, "uri"):
                                    content_dict["uri"] = content.uri
                                if hasattr(content, "resource"):
                                    content_dict["resource"] = content.resource
                            message_dict["content"] = content_dict
                        else:
                            # Fallback for simple/unknown content types
                            message_dict["content"] = {"type": "text", "text": str(content)}
                    messages.append(message_dict)
    
            prompt_info = {
                "name": name,
                "description": result.description if hasattr(result, "description") and result.description else "",
                "messages": messages,
            }
    
            # User-facing success update
            await ctx.info(f"Prompt '{name}' retrieved successfully with {len(messages)} messages")
            # Detailed technical log
            logger.info(
                f"Prompt '{name}' retrieved successfully",
                extra={
                    "prompt_name": name,
                    "message_count": len(messages),
                    "duration_ms": prompt_elapsed_ms,
                },
            )
    
            return {
                "success": True,
                "prompt": prompt_info,
                "metadata": {
                    "request_time_ms": round(total_elapsed_ms, 2),
                    "server_url": state.server_url,
                    "connection_statistics": state.statistics,
                },
            }
    
        except ConnectionError as e:
            elapsed_ms = (time.perf_counter() - start_time) * 1000
    
            # User-facing error update
            await ctx.error(f"Not connected when getting prompt '{name}': {str(e)}")
            # Detailed technical log
            logger.error(
                f"Not connected when getting prompt '{name}': {str(e)}",
                extra={"prompt_name": name, "duration_ms": elapsed_ms},
            )
    
            return {
                "success": False,
                "error": {
                    "error_type": "not_connected",
                    "message": str(e),
                    "details": {"prompt_name": name},
                    "suggestion": "Use connect_to_server() to establish a connection first",
                },
                "prompt": None,
                "metadata": {
                    "request_time_ms": round(elapsed_ms, 2),
                },
            }
    
        except Exception as e:
            elapsed_ms = (time.perf_counter() - start_time) * 1000
    
            # Determine error type based on exception message
            error_type = "execution_error"
            suggestion = "Check the prompt name and arguments, then retry"
    
            error_msg = str(e).lower()
            if "not found" in error_msg or "unknown prompt" in error_msg or "no prompt" in error_msg:
                error_type = "prompt_not_found"
                suggestion = f"Prompt '{name}' does not exist on the server. Use list_prompts() to see available prompts"
            elif "argument" in error_msg or "parameter" in error_msg or "validation" in error_msg or "required" in error_msg:
                error_type = "invalid_arguments"
                suggestion = f"Arguments do not match the prompt schema. Use list_prompts() to see the correct schema for '{name}'"
    
            # User-facing error update
            await ctx.error(f"Failed to get prompt '{name}': {str(e)}")
            # Detailed technical log
            logger.error(
                f"Failed to get prompt '{name}': {str(e)}",
                extra={
                    "prompt_name": name,
                    "arguments": arguments,
                    "error_type": error_type,
                    "duration_ms": elapsed_ms,
                },
            )
    
            # Increment error counter
            ConnectionManager.increment_stat("errors")
    
            return {
                "success": False,
                "error": {
                    "error_type": error_type,
                    "message": f"Failed to get prompt '{name}': {str(e)}",
                    "details": {
                        "prompt_name": name,
                        "arguments": arguments,
                        "exception_type": type(e).__name__,
                    },
                    "suggestion": suggestion,
                },
                "prompt": None,
                "metadata": {
                    "request_time_ms": round(elapsed_ms, 2),
                },
            }
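    The substring heuristic in the final except block can be exercised in isolation. This standalone sketch reproduces its classification logic outside the handler:

    ```python
    def classify_error(message: str) -> str:
        """Mirror of the handler's substring-based error classification."""
        msg = message.lower()
        if "not found" in msg or "unknown prompt" in msg or "no prompt" in msg:
            return "prompt_not_found"
        if any(k in msg for k in ("argument", "parameter", "validation", "required")):
            return "invalid_arguments"
        return "execution_error"
    ```

    Because the "not found" check runs first, a message like "required prompt not found" classifies as prompt_not_found even though it also contains "required".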
  • Import statement in the main server module that loads the prompts module, triggering the execution of @mcp.tool decorators and thus registering the get_prompt tool on the shared FastMCP instance.
    from .tools import connection, tools, resources, prompts, llm
  • The shared FastMCP server instance 'mcp', to which all tools are registered via the @mcp.tool decorators used in tool modules such as prompts.py.
    mcp = FastMCP(name="mcp-test-mcp")
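The @mcp.tool registration pattern can be illustrated with a minimal decorator-based registry. This is a sketch of the general mechanism only, not FastMCP's actual implementation:

```python
class MiniRegistry:
    """Minimal sketch of decorator-based tool registration (not FastMCP)."""

    def __init__(self, name: str):
        self.name = name
        self.tools = {}

    def tool(self, fn):
        # Record the function under its own name, then return it unchanged,
        # so importing the defining module is enough to register the tool.
        self.tools[fn.__name__] = fn
        return fn


mcp = MiniRegistry(name="mcp-test-mcp")


@mcp.tool
def get_prompt_stub():
    return {"success": True}
```

This is why the `from .tools import ... prompts ...` line above matters: the import executes the module body, which runs the decorators and populates the registry.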
