ARIZE HOLIDAY SPECIAL

December 15, 2023

Let’s get into the LLM spirit together! Join us for live, virtual sessions focused on prompt engineering, search and retrieval workflows, and LLM system evaluations. Each session is a hands-on workshop where participants apply technical skills to construct evaluation approaches for Retrieval-Augmented Generation (RAG) systems and generative AI language models.


Save your seat

Topics include

From RAGtag to RAGing: Evaluating Search and Retrieval Use-cases with Phoenix Tracing

This tutorial walks through building a RAG pipeline and evaluating it with Phoenix Evals. It covers understanding Retrieval-Augmented Generation (RAG), building a RAG pipeline with a framework such as LlamaIndex, and evaluating it with Phoenix Evals.
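To give a flavor of what that workflow looks like, here is a minimal sketch (ours, not the workshop’s actual code) that indexes local documents with LlamaIndex and grades the retrieved context with a Phoenix Evals LLM-as-judge. The ./data directory, the question, and the model choice are illustrative assumptions, and import paths vary across Phoenix and LlamaIndex versions.

```python
# Minimal RAG + evaluation sketch; assumes OPENAI_API_KEY is set and a
# local ./data directory of documents. Illustrative only.
import pandas as pd
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from phoenix.evals import (
    OpenAIModel,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    llm_classify,
)

# 1. Build the RAG pipeline: index documents and expose a query engine.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# 2. Run a query and collect each retrieved chunk alongside the question.
question = "What does the documentation say about rate limits?"  # made-up example
response = query_engine.query(question)
eval_df = pd.DataFrame(
    {
        "input": [question] * len(response.source_nodes),
        "reference": [node.get_content() for node in response.source_nodes],
    }
)

# 3. Use an LLM-as-judge to label each retrieved chunk relevant/irrelevant.
rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())
relevance = llm_classify(
    dataframe=eval_df,
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    model=OpenAIModel(model="gpt-4"),
    rails=rails,
)
print(relevance["label"].value_counts())
```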

Constructing an Evaluation Approach for Generative AI Models

As Large Language Models (LLMs) revolutionize data science with generative use cases, their real-world applications challenge traditional evaluation methods built for discriminative tasks.

Building your own RAGs with LangChain

A practical guide to constructing a Retrieval-Augmented Generation (RAG) pipeline using the LangChain framework. We’ll cover the essentials of RAG, its integration with LLMs, and the unique advantages it offers in natural language processing.
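As a taste of what the session covers, below is a minimal sketch (an illustration under our own assumptions, not the session’s code) of a classic RAG chain using late-2023 LangChain imports; the ./docs directory, chunk sizes, and query are made up for the example.

```python
# Minimal LangChain RAG sketch; assumes OPENAI_API_KEY is set and a local
# ./docs directory of documents. Illustrative only.
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import DirectoryLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Load and chunk source documents so they fit in the LLM's context window.
docs = DirectoryLoader("./docs").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# Embed the chunks into a FAISS vector store for similarity search.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Wire retrieval into generation: the top-k chunks are stuffed into the prompt.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
)
print(qa.run("How do I rotate my API keys?"))  # made-up example query
```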

Vibe-Based Prompt Engineering

Drawing on insights from PromptLayer’s collaboration with top-tier teams, this talk highlights the need for iteration over “silver-bullet” MLOps-style evaluations. “Vibe-based” evaluation is the scientific process in action: try a prompt and check the output.
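As a toy illustration of that loop (our sketch, not the speaker’s material), the snippet below runs two made-up prompt variants through the OpenAI chat API and prints the outputs for side-by-side eyeballing.

```python
# "Vibe-based" iteration sketch: try each prompt, check the output, keep
# whichever feels right, then tweak and repeat. Prompts are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

variants = [
    "Summarize this support ticket in one sentence: {ticket}",
    "You are a support lead. Give a one-line summary of: {ticket}",
]
ticket = "Customer reports intermittent 500 errors after last night's deploy."

for prompt in variants:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt.format(ticket=ticket)}],
    )
    print(f"--- {prompt!r}\n{reply.choices[0].message.content}\n")
```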

Speakers

Amber Roberts

ML Growth Lead,
Arize AI

Madhav Thaker

Senior Data Scientist,
Shopify

Jared Zoneraich

Founder,
PromptLayer

Rajiv Shah

Machine Learning Engineer,
HuggingFace

Registration is open

Save your seat