---
description: >-
  This notebook is adapted from Google's "Building and Deploying a LangGraph
  Application with Agent Engine in Vertex AI"
---

# Product Recommendation Agent: Google Agent Engine & LangGraph

{% hint style="info" %}
Original Author(s): [Kristopher Overholt](https://github.com/koverholt)
{% endhint %}

{% embed url="https://colab.research.google.com/github/arize-ai/phoenix/blob/main/tutorials/tracing/google_agent_engine_tracing_tutorial.ipynb" %}

This tutorial demonstrates how to build, deploy, and trace a product recommendation agent using Google's Agent Engine with LangGraph. You'll learn how to combine LangGraph's workflow orchestration with the scalability of Vertex AI to create a custom generative AI application that can provide product details and recommendations.

You will:

* Build a product recommendation agent using LangGraph and Google's Agent Engine
* Define custom tools for product information retrieval
* Deploy the agent to Vertex AI for scalable execution
* Instrument the agent with Phoenix for comprehensive tracing

By the end of this tutorial, you'll have the skills and knowledge to build and deploy your own custom generative AI applications using LangGraph, Agent Engine, and Vertex AI.

## Notebook Walkthrough

We will go through key code snippets on this page. To follow the full tutorial, check out the notebook above.
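The snippets on this page assume the notebook's setup cells have already run. As a rough sketch of the imports they rely on (inferred from the libraries pinned in the deployment step below, not copied verbatim from the notebook):

```python
from typing import Literal

# LangChain / LangGraph pieces used by the tools, router, and graph below
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_google_vertexai import ChatVertexAI
from langgraph.graph import END, MessageGraph
from langgraph.prebuilt import ToolNode

# Vertex AI Agent Engine client used in the deployment step
from vertexai import agent_engines
```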
## Define Product Recommendation Tools

Create custom Python functions that act as tools your AI agent can use to provide product information.

```python
def get_product_details(product_name: str):
    """Gathers basic details about a product."""
    details = {
        "smartphone": "A cutting-edge smartphone with advanced camera features and lightning-fast processing.",
        "coffee": "A rich, aromatic blend of ethically sourced coffee beans.",
        "shoes": "High-performance running shoes designed for comfort, support, and speed.",
        "headphones": "Wireless headphones with advanced noise cancellation technology for immersive audio.",
        "speaker": "A voice-controlled smart speaker that plays music, sets alarms, and controls smart home devices.",
    }
    return details.get(product_name, "Product details not found.")
```

## Define Router Logic

Set up routing logic to control conversation flow and tool selection based on user input.

```python
def router(state: list[BaseMessage]) -> Literal["get_product_details", "__end__"]:
    """Initiates product details retrieval if the user asks for a product."""
    # Get the tool_calls from the last message in the conversation history.
    tool_calls = state[-1].tool_calls

    # If there are any tool_calls, route to the tool node.
    if len(tool_calls):
        # Return the name of the tool to be called.
        return "get_product_details"
    else:
        # End the conversation flow.
        return "__end__"
```

## Build the LangGraph Application

Define your LangGraph application as a custom template in Agent Engine with Phoenix instrumentation.

```python
class SimpleLangGraphApp:
    def __init__(self, project: str, location: str) -> None:
        self.project_id = project
        self.location = location

    # The set_up method is used to define application initialization logic.
    def set_up(self) -> None:
        # Phoenix code begins
        from phoenix.otel import register

        register(
            project_name="google-agent-framework-langgraph",  # name this whatever you would like
            auto_instrument=True,  # this automatically applies all installed OpenInference
            # instrumentors (e.g. openinference-instrumentation-langchain)
        )
        # Phoenix code ends

        model = ChatVertexAI(model="gemini-2.0-flash")

        builder = MessageGraph()

        model_with_tools = model.bind_tools([get_product_details])
        builder.add_node("tools", model_with_tools)

        tool_node = ToolNode([get_product_details])
        builder.add_node("get_product_details", tool_node)
        builder.add_edge("get_product_details", END)

        builder.set_entry_point("tools")
        builder.add_conditional_edges("tools", router)

        self.runnable = builder.compile()

    # The query method will be used to send inputs to the agent.
    def query(self, message: str):
        """Query the application.

        Args:
            message: The user message.

        Returns:
            str: The LLM response.
        """
        chat_history = self.runnable.invoke(HumanMessage(message))

        return chat_history[-1].content
```

## Test the Agent Locally

Test your LangGraph app locally before deployment to ensure it behaves as expected.

```python
agent = SimpleLangGraphApp(project=PROJECT_ID, location=LOCATION)
agent.set_up()
```

```python
agent.query(message="Get product details for shoes")
```

```python
agent.query(message="Get product details for coffee")
```

```python
agent.query(message="Get product details for smartphone")
```

```python
# Ask a question that cannot be answered using the defined tools
agent.query(message="Tell me about the weather")
```
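The deployment call below also assumes the Vertex AI SDK was initialized earlier in the notebook with a staging bucket. A minimal sketch, assuming your own `PROJECT_ID` and `LOCATION`; the bucket URI is a placeholder, not from this page:

```python
import vertexai

# Agent Engine stages your packaged application in a Cloud Storage bucket you own.
# The URI below is a placeholder; substitute your own bucket.
STAGING_BUCKET = "gs://your-staging-bucket"

vertexai.init(project=PROJECT_ID, location=LOCATION, staging_bucket=STAGING_BUCKET)
```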
## Deploy to Agent Engine

Deploy your LangGraph application to Agent Engine for scalable execution and remote access.

```python
remote_agent = agent_engines.create(
    SimpleLangGraphApp(project=PROJECT_ID, location=LOCATION),
    requirements=[
        "google-cloud-aiplatform[agent_engines,langchain]==1.87.0",
        "cloudpickle==3.0.0",
        "pydantic==2.11.2",
        "langgraph==0.2.76",
        "httpx",
        "arize-phoenix-otel>=0.9.0",
        "openinference-instrumentation-langchain>=0.1.4",
    ],
    display_name="Agent Engine with LangGraph",
    description="This is a sample custom application in Agent Engine that uses LangGraph",
    extra_packages=[],
)
```

## Test the Deployed Agent

Test your deployed agent in the remote environment to verify it works correctly in production.

```python
remote_agent.query(message="Get product details for shoes")
```

```python
remote_agent.query(message="Get product details for coffee")
```

```python
remote_agent.query(message="Get product details for smartphone")
```

```python
remote_agent.query(message="Tell me about the weather")
```

## Inspect Traces in Phoenix

After running your agent, you can inspect the trace data in Phoenix to understand:

* How the agent processes user queries
* Which tools are called and when
* The reasoning process behind tool selection
* Performance metrics and latency
* The complete conversation flow from query to response

The trace data will show you the complete flow of the product recommendation agent, from initial query processing to final response generation, giving you insights into the agent's decision-making process.
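If your session restarts before cleanup, you can typically reattach to the deployed agent by its resource name instead of redeploying. A sketch, with placeholder IDs:

```python
from vertexai import agent_engines

# Look up an existing deployment by its full resource name, which is printed
# when the agent is created. PROJECT_ID, LOCATION, and ENGINE_ID are placeholders.
remote_agent = agent_engines.get(
    "projects/PROJECT_ID/locations/LOCATION/reasoningEngines/ENGINE_ID"
)
```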
## Clean Up Resources

After you've finished experimenting, clean up your cloud resources to avoid unexpected charges.

```python
remote_agent.delete()
```

## Next Steps

As next steps, you can:

* Expand the agent's capabilities by adding more product categories and tools
* Implement more sophisticated routing logic for complex queries
* Add evaluation metrics to measure the agent's performance
* Analyze the trace data to optimize the agent's decision-making process
* Extend the agent to handle multi-turn conversations and product comparisons (one possible starting point is sketched below)
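For the multi-turn idea above: `MessageGraph` state is a plain list of messages, so one hypothetical starting point is a helper that invokes the compiled graph with prior turns included. `query_with_history` is illustrative, not part of the tutorial:

```python
from langchain_core.messages import AIMessage, HumanMessage

def query_with_history(app: SimpleLangGraphApp, messages: list) -> str:
    """Hypothetical helper: invoke the compiled graph with the whole conversation."""
    chat_history = app.runnable.invoke(messages)
    return chat_history[-1].content

# Illustrative usage: earlier turns give the model context for follow-ups.
# query_with_history(agent, [
#     HumanMessage("Get product details for shoes"),
#     AIMessage("High-performance running shoes designed for comfort, support, and speed."),
#     HumanMessage("Now get the details for headphones"),
# ])
```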