Tracing a LangGraph Application with Agent Engine in Vertex AI

This notebook is adapted from Google's "Building and Deploying a LangGraph Application with Agent Engine in Vertex AI" | Original Author: Kristopher Overholt

Overview

Agent Engine is a managed service that helps you build and deploy agent frameworks. LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows.

This notebook demonstrates how to build, deploy, and test a simple LangGraph application using Agent Engine in Vertex AI. You'll learn how to combine LangGraph's workflow orchestration with the scalability of Vertex AI, which enables you to build custom generative AI applications.

Note that the approach used in this notebook defines a custom application template in Agent Engine, which can be extended to LangChain or other orchestration frameworks. If you just want to use Agent Engine to build agentic generative AI applications, refer to the documentation for developing with the LangChain template in Agent Engine.

This notebook covers the following steps:

  • Define Tools: Create custom Python functions to act as tools your AI application can use.

  • Define Router: Set up routing logic to control conversation flow and tool selection.

  • Build a LangGraph Application: Structure your application using LangGraph, including the Gemini model and custom tools that you define.

  • Local Testing: Test your LangGraph application locally to ensure functionality.

  • Deploying to Vertex AI: Seamlessly deploy your LangGraph application to Agent Engine for scalable execution.

  • Remote Testing: Interact with your deployed application through Vertex AI, testing its functionality in a production-like environment.

  • Cleaning Up Resources: Delete your deployed application on Vertex AI to avoid incurring unnecessary charges.

By the end of this notebook, you'll have the skills and knowledge to build and deploy your own custom generative AI applications using LangGraph, Agent Engine, and Vertex AI.

Get started

Install Vertex AI SDK and other required packages

%pip install --upgrade --user --quiet \
    "google-cloud-aiplatform[agent_engines,langchain]" \
    cloudpickle \
    pydantic \
    langgraph \
    httpx \
    arize-otel \
    openinference-instrumentation-langchain

Restart runtime

To use the newly installed packages in this Jupyter runtime, you must restart the runtime. You can do this by running the cell below, which restarts the current kernel.

The restart might take a minute or longer. After it's restarted, continue to the next step.

import IPython

app = IPython.Application.instance()
app.kernel.do_shutdown(True)

⚠️ The kernel is going to restart. Wait until it's finished before continuing to the next step. ⚠️

Authenticate your notebook environment (Colab only)

If you're running this notebook on Google Colab, run the cell below to authenticate your environment.

import sys

if "google.colab" in sys.modules:
    from google.colab import auth

    auth.authenticate_user()

Set Google Cloud project information and initialize Vertex AI SDK

To get started using Vertex AI, you must have an existing Google Cloud project and enable the Vertex AI API.

Learn more about setting up a project and a development environment.

PROJECT_ID = ""  # @param {type:"string"}
LOCATION = ""  # @param {type:"string"}
STAGING_BUCKET = ""  # @param {type:"string"}

import vertexai

vertexai.init(project=PROJECT_ID, location=LOCATION, staging_bucket=STAGING_BUCKET)

Building and deploying a LangGraph app on Agent Engine

In the following sections, we'll walk through the process of building and deploying a LangGraph application using Agent Engine in Vertex AI.

Import libraries

Import the necessary Python libraries. These libraries provide the tools we need to interact with LangGraph, Vertex AI, and other components of our application.

from typing import Literal

from langchain_core.messages import BaseMessage, HumanMessage
from langchain_google_vertexai import ChatVertexAI
from langgraph.graph import END, MessageGraph
from langgraph.prebuilt import ToolNode
from vertexai import agent_engines

Define tools

You'll start by defining a tool for your LangGraph application: a custom Python function that acts as a tool in your agentic application.

In this case, we'll define a simple tool that returns a product description based on the product that the user asks about. In practice, you can write functions that call APIs, query databases, or perform any other task that you want your agent to handle.

def get_product_details(product_name: str):
    """Gathers basic details about a product."""
    details = {
        "smartphone": "A cutting-edge smartphone with advanced camera features and lightning-fast processing.",
        "coffee": "A rich, aromatic blend of ethically sourced coffee beans.",
        "shoes": "High-performance running shoes designed for comfort, support, and speed.",
        "headphones": "Wireless headphones with advanced noise cancellation technology for immersive audio.",
        "speaker": "A voice-controlled smart speaker that plays music, sets alarms, and controls smart home devices.",
    }
    return details.get(product_name, "Product details not found.")
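
For illustration, here is a hedged sketch of what a tool backed by a real service might look like, using the httpx package installed earlier. The endpoint URL and response shape are hypothetical; substitute your own service:

import httpx


def get_product_details_from_api(product_name: str) -> str:
    """Hypothetical tool that fetches product details from an inventory API."""
    # The endpoint and response schema below are made up for illustration.
    response = httpx.get(
        "https://example.com/api/products", params={"name": product_name}
    )
    if response.status_code == 200:
        return response.json().get("description", "Product details not found.")
    return "Product details not found."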

Define router

Next, you'll define a router to control the flow of the conversation and determine which tool to use based on the user input or the state of the interaction. This example uses a simple router; you can customize its behavior to handle multiple tools, custom logic, or multi-agent workflows.

def router(state: list[BaseMessage]) -> Literal["get_product_details", "__end__"]:
    """Initiates product details retrieval if the user asks for a product."""
    # Get the tool_calls from the last message in the conversation history.
    tool_calls = state[-1].tool_calls
    # If the model requested a tool call, route to the tool node.
    if tool_calls:
        return "get_product_details"
    else:
        # Otherwise, end the conversation flow.
        return "__end__"

Set Arize AX variables: Space ID and API key

You'll need your Arize Space ID and API key to send traces to the Arize AX platform (sign up for free at arize.com). Both values are on your Arize space settings page. The cell below is a minimal sketch for collecting them; the variable names are illustrative, and the values are pasted into the register() call inside the set_up method defined later.
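
# Illustrative placeholders: find both values on your Arize space settings page,
# then paste them into the register() call in the set_up method below.
ARIZE_SPACE_ID = "<INSERT YOUR SPACE ID>"  # @param {type:"string"}
ARIZE_API_KEY = "<INSERT YOUR ARIZE API KEY>"  # @param {type:"string"}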

Define LangGraph application

Now you'll bring everything together to define your LangGraph application as a custom template in Agent Engine.

This application will use the tool and router that you just defined. LangGraph provides a powerful way to structure these interactions and leverage the capabilities of LLMs.

class SimpleLangGraphApp:
    def __init__(self, project: str, location: str) -> None:
        self.project_id = project
        self.location = location

    # The set_up method is used to define application initialization logic
    def set_up(self) -> None:
        # Arize instrumentation begin
        from arize.otel import register

        tracer_provider = register(
            space_id="<INSERT YOUR SPACE ID>",  # found on your Arize space settings page
            api_key="<INSERT YOUR ARIZE API KEY>",  # found on your Arize space settings page
            project_name="agent-framework-langgraph",  # name this whatever you like
        )

        from openinference.instrumentation.langchain import LangChainInstrumentor

        LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
        # Arize instrumentation end

        model = ChatVertexAI(model="gemini-2.0-flash")

        builder = MessageGraph()

        model_with_tools = model.bind_tools([get_product_details])
        builder.add_node("tools", model_with_tools)

        tool_node = ToolNode([get_product_details])
        builder.add_node("get_product_details", tool_node)
        builder.add_edge("get_product_details", END)

        builder.set_entry_point("tools")
        builder.add_conditional_edges("tools", router)

        self.runnable = builder.compile()

    # The query method will be used to send inputs to the agent
    def query(self, message: str):
        """Query the application.

        Args:
            message: The user message.

        Returns:
            str: The LLM response.
        """
        chat_history = self.runnable.invoke(HumanMessage(message))

        return chat_history[-1].content

Local testing

In this section, you'll test your LangGraph app locally to ensure that it behaves as expected before you deploy it.

agent = SimpleLangGraphApp(project=PROJECT_ID, location=LOCATION)
agent.set_up()
agent.query(message="Get product details for shoes")
agent.query(message="Get product details for coffee")
agent.query(message="Get product details for smartphone")
# Ask a question that cannot be answered using the defined tools
agent.query(message="Tell me about the weather")

Deploy your LangGraph app

Now that you've verified that your LangGraph application works locally, it's time to deploy it to Agent Engine! This makes your application accessible remotely and lets you integrate it into larger systems or provide it as a service. Because set_up runs on the deployed agent, the requirements list below includes the Arize tracing packages along with the other dependencies installed earlier.

remote_agent = agent_engines.create(
    SimpleLangGraphApp(project=PROJECT_ID, location=LOCATION),
    display_name="Agent Engine with LangGraph",
    description="This is a sample custom application in Agent Engine that uses LangGraph",
    # Runtime dependencies for the deployed agent, mirroring the install cell above
    requirements=[
        "google-cloud-aiplatform[agent_engines,langchain]",
        "cloudpickle", "pydantic", "langgraph", "httpx",
        "arize-otel", "openinference-instrumentation-langchain",
    ],
    extra_packages=[],
)
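
Deployment can take several minutes. If you come back in a later session, you should be able to reconnect to the deployed agent by its resource name (the value below is a placeholder; the actual resource name is printed when the agent is created):

# Placeholder resource name; substitute the one printed during creation.
# remote_agent = agent_engines.get(
#     "projects/PROJECT_ID/locations/LOCATION/reasoningEngines/ENGINE_ID"
# )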

Remote test

Now that your LangGraph app is running on Agent Engine, test it by querying it in the remote environment. Because the instrumentation in set_up runs on the deployed agent, these queries also emit traces to your Arize project (agent-framework-langgraph), where you can inspect the spans for each run:

remote_agent.query(message="Get product details for shoes")
remote_agent.query(message="Get product details for coffee")
remote_agent.query(message="Get product details for smartphone")
remote_agent.query(message="Tell me about the weather")

Cleaning up

After you've finished experimenting, it's a good practice to clean up your cloud resources. You can delete the deployed Agent Engine instance to avoid any unexpected charges on your Google Cloud account.

remote_agent.delete()
