ARIZE HOLIDAY SPECIAL
December 15, 2023
Let’s get into the LLM spirit together! Join us for live, virtual sessions on prompt engineering, search and retrieval workflows, and LLM system evaluations. Each session is a hands-on workshop where participants apply technical skills to build evaluation approaches for Retrieval-Augmented Generation (RAG) systems and generative AI language models.
Topics include:
This tutorial walks through building a RAG pipeline and evaluating it with Phoenix Evals. It covers: understanding Retrieval-Augmented Generation (RAG), building a RAG pipeline (with the help of a framework such as LlamaIndex), and evaluating RAG with Phoenix Evals (see the sketch below).
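To make those three steps concrete, here is a minimal sketch, not the workshop’s actual code: index documents with LlamaIndex, run a query, then grade the relevance of the retrieved context with Phoenix Evals. The `./data` directory and the judge model are assumptions, and import paths vary by version (LlamaIndex 0.10+ moved these classes to `llama_index.core`, and older Phoenix releases exposed evals under `phoenix.experimental.evals`).

```python
# Minimal RAG + evaluation sketch; assumes OPENAI_API_KEY is set
# and ./data holds a few text files (hypothetical paths).
import pandas as pd
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from phoenix.evals import (
    OpenAIModel,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    llm_classify,
)

# 1. Build: index local documents and create a query engine.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# 2. Query: run a question and keep the retrieved context chunks.
question = "What is retrieval-augmented generation?"
response = query_engine.query(question)
retrieved = [node.get_content() for node in response.source_nodes]

# 3. Evaluate: ask an LLM judge whether each retrieved chunk is
#    relevant to the question, using Phoenix's built-in RAG template.
eval_df = pd.DataFrame(
    {"input": [question] * len(retrieved), "reference": retrieved}
)
relevance = llm_classify(
    dataframe=eval_df,
    model=OpenAIModel(model="gpt-4"),
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    rails=list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values()),
)
print(relevance["label"].value_counts())
```

In practice you would evaluate many question/context pairs rather than a single query; Phoenix can also collect these pairs automatically from LlamaIndex traces.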
As Large Language Models (LLMs) revolutionize data science with generative use cases, their real-world applications challenge traditional evaluation methods that were built for discriminative tasks.
A practical guide to constructing a Retrieval-Augmented Generation (RAG) model using the LangChain framework. We’ll cover the essentials of RAG, its integration with LLMs, and the unique advantages it offers in natural language processing.
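As a taste of what this session covers, here is a minimal sketch of a RAG chain using LangChain’s classic `RetrievalQA` API. The document path, chunking parameters, and model choice are illustrative assumptions, and the local FAISS store requires the `faiss-cpu` package.

```python
# Minimal LangChain RAG sketch (classic, pre-LCEL API);
# assumes OPENAI_API_KEY is set and ./docs/notes.txt exists.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Load the source document and split it into overlapping chunks.
docs = TextLoader("./docs/notes.txt").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=50
).split_documents(docs)

# Embed the chunks and store them in a local FAISS index.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Retrieval + generation: fetch the top-k relevant chunks,
# then have the LLM answer grounded in them.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
    return_source_documents=True,
)
result = qa({"query": "What are the unique advantages of RAG?"})
print(result["result"])
```

Returning the source documents alongside the answer is what makes a RAG response inspectable: you can check whether the generation is actually grounded in what was retrieved.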
Drawing on insights from PromptLayer’s collaboration with top-tier teams, this talk highlights the need for iteration over “silver-bullet” MLOps-style evaluations. “Vibe-based” evaluation is the scientific method in miniature: try a prompt, check the output, and iterate.
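A minimal sketch of that loop, assuming the OpenAI Python client (v1); the prompt variants and question are hypothetical stand-ins for whatever you are iterating on.

```python
# "Try a prompt, check the output" loop; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt variants under comparison.
prompt_variants = [
    "Answer concisely: {question}",
    "You are a careful analyst. Answer step by step: {question}",
]
question = "Why does RAG reduce hallucinations?"

# Run each variant, eyeball the output, keep what reads best, iterate.
for prompt in prompt_variants:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt.format(question=question)}],
    )
    print(f"--- {prompt!r}\n{response.choices[0].message.content}\n")
```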