# list_prompts
Retrieve all available prompts from an MCP server with names, descriptions, and argument schemas for accurate invocation.
## Instructions
List all prompts available on the connected MCP server.
Retrieves comprehensive information about all prompts exposed by the target server, including names, descriptions, and complete argument schemas to enable accurate prompt invocation.
Returns: Dictionary with the prompt listing, including:
- success: True on successful retrieval
- prompts: List of prompt objects with name, description, and arguments schema
- metadata: Total count, server info, timing information

Raises: Nothing; an error dict is returned instead if the client is not connected or retrieval fails.
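Because both the success and error paths return the same envelope (`success`, `prompts`, `metadata`), callers can branch on `success` without catching exceptions. A minimal sketch of consuming the response, using an illustrative sample payload rather than a live server call:

```python
# Sketch: the response envelope a caller of list_prompts can expect.
# The sample payload below is illustrative, not from a live server.
sample_response = {
    "success": True,
    "prompts": [
        {
            "name": "summarize",
            "description": "Summarize a document",
            "arguments": [
                {"name": "text", "description": "Input text", "required": True},
            ],
        },
    ],
    "metadata": {"total_prompts": 1, "request_time_ms": 12.3},
}

def prompt_names(response: dict) -> list[str]:
    """Return prompt names, or an empty list on an error response."""
    if not response.get("success"):
        return []
    return [p["name"] for p in response.get("prompts", [])]

print(prompt_names(sample_response))  # -> ['summarize']
```

The helper name `prompt_names` is hypothetical; the point is that error responses degrade to an empty list rather than raising.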
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| _No arguments_ | | | |
## Implementation Reference
- The @mcp.tool decorated handler function that implements the list_prompts tool. It connects to a target MCP server, retrieves the list of prompts, processes their schemas, and returns structured data with success/error handling, logging, and progress updates.

```python
@mcp.tool
async def list_prompts(ctx: Context) -> dict[str, Any]:
    """List all prompts available on the connected MCP server.

    Retrieves comprehensive information about all prompts exposed by the
    target server, including names, descriptions, and complete argument
    schemas to enable accurate prompt invocation.

    Returns:
        Dictionary with prompt listing including:
        - success: True on successful retrieval
        - prompts: List of prompt objects with name, description, and arguments schema
        - metadata: Total count, server info, timing information

    Raises:
        Returns error dict if not connected or retrieval fails
    """
    start_time = time.perf_counter()

    try:
        # Verify connection exists
        client, state = ConnectionManager.require_connection()

        # User-facing progress update
        await ctx.info("Listing prompts from connected MCP server")
        # Detailed technical log
        logger.info("Listing prompts from connected MCP server")

        # Get prompts from the server
        prompts_result = await client.list_prompts()

        elapsed_ms = (time.perf_counter() - start_time) * 1000

        # Convert prompts to dictionary format with full argument schemas
        # Note: client.list_prompts() returns a list directly, not an object with .prompts
        prompts_list = []
        for prompt in prompts_result:
            # Extract arguments schema
            arguments = []
            if hasattr(prompt, "arguments") and prompt.arguments:
                for arg in prompt.arguments:
                    arg_dict = {
                        "name": arg.name,
                        "description": arg.description if arg.description else "",
                        "required": arg.required if hasattr(arg, "required") else False,
                    }
                    arguments.append(arg_dict)

            prompt_dict = {
                "name": prompt.name,
                "description": prompt.description if prompt.description else "",
                "arguments": arguments,
            }
            prompts_list.append(prompt_dict)

        metadata = {
            "total_prompts": len(prompts_list),
            "server_url": state.server_url,
            "retrieved_at": time.time(),
            "request_time_ms": round(elapsed_ms, 2),
        }

        # Add server info if available
        if state.server_info:
            metadata["server_name"] = state.server_info.get("name", "unknown")
            metadata["server_version"] = state.server_info.get("version")

        # User-facing success update
        await ctx.info(f"Retrieved {len(prompts_list)} prompts from server")
        # Detailed technical log
        logger.info(
            f"Retrieved {len(prompts_list)} prompts from server",
            extra={
                "prompt_count": len(prompts_list),
                "server_url": state.server_url,
                "duration_ms": elapsed_ms,
            },
        )

        return {
            "success": True,
            "prompts": prompts_list,
            "metadata": metadata,
        }

    except ConnectionError as e:
        elapsed_ms = (time.perf_counter() - start_time) * 1000
        # User-facing error update
        await ctx.error(f"Not connected: {str(e)}")
        # Detailed technical log
        logger.error(f"Not connected: {str(e)}", extra={"duration_ms": elapsed_ms})
        return {
            "success": False,
            "error": {
                "error_type": "not_connected",
                "message": str(e),
                "details": {},
                "suggestion": "Use connect_to_server() to establish a connection first",
            },
            "prompts": [],
            "metadata": {
                "request_time_ms": round(elapsed_ms, 2),
            },
        }

    except Exception as e:
        elapsed_ms = (time.perf_counter() - start_time) * 1000
        # User-facing error update
        await ctx.error(f"Failed to list prompts: {str(e)}")
        # Detailed technical log
        logger.exception("Failed to list prompts", extra={"duration_ms": elapsed_ms})
        # Increment error counter
        ConnectionManager.increment_stat("errors")
        return {
            "success": False,
            "error": {
                "error_type": "execution_error",
                "message": f"Failed to list prompts: {str(e)}",
                "details": {"exception_type": type(e).__name__},
                "suggestion": "Check that the server supports the prompts capability and is responding correctly",
            },
            "prompts": [],
            "metadata": {
                "request_time_ms": round(elapsed_ms, 2),
            },
        }
```
- node-wrapper/python-src/src/mcp_test_mcp/server.py:98-98 (registration)

  Import of the prompts module in the main server.py file, which triggers automatic tool registration via @mcp.tool decorators in prompts.py.

  ```python
  from .tools import connection, tools, resources, prompts, llm
  ```
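The import-triggers-registration pattern works because decorators execute at module import time. A dependency-free sketch of the mechanism (a hypothetical registry, not the actual FastMCP internals):

```python
# Hypothetical registry mimicking how a @mcp.tool-style decorator
# registers functions as a side effect of importing their module.
TOOL_REGISTRY: dict = {}

def tool(fn):
    """Decorator: record the function in the registry, return it unchanged."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

# In the real server these definitions live in tools/prompts.py; merely
# importing that module runs the decorators and populates the registry.
@tool
def list_prompts():
    return {"success": True, "prompts": []}

print(sorted(TOOL_REGISTRY))  # -> ['list_prompts']
```

This is why server.py only needs the bare `from .tools import ...` line: the import's side effects do the registration.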