Workshop

RAG Time! Evaluate RAG with LLM Evals and Benchmarking

  January 31st, 2024

  10:00am PST – 10:45am PST

Virtual

LLMs are trained on vast datasets, but those datasets do not include your specific data (things like company knowledge bases and documentation). Retrieval-Augmented Generation (RAG) addresses this by dynamically incorporating your data as context during generation. Learning RAG is a critical step in building applications such as chatbots and agents.

Join us on January 31st at 10am PST to learn how to use your data in real time to provide more tailored and contextually relevant responses. In this 45-minute live workshop, participants will implement key concepts of RAG, including:
  • Indexing: In RAG, your data is loaded and prepared for querying; this process is called indexing. User queries run against the index, which filters your data down to the most relevant context. That context and your query are then sent to the LLM along with a prompt, so the LLM can return the most accurate and relevant response (see the first sketch after this list).
  • LLM Evaluations: Gain visibility into the performance of your application. In this workshop, you will use an LLM to grade whether the retrieved chunks are relevant to the query (see the second sketch below).
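
To make the indexing and retrieval flow concrete, here is a minimal, illustrative Python sketch. It is not the workshop code: embed() is a stand-in for whatever embedding model you use, and the in-memory list stands in for a real vector store.

    # Minimal sketch of RAG indexing and retrieval (illustrative only).
    # embed() is assumed to be any function that maps text to an embedding vector.
    import math
    from typing import Callable, List, Tuple

    Vector = List[float]

    def cosine(a: Vector, b: Vector) -> float:
        # Similarity between a query vector and a chunk vector.
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def build_index(chunks: List[str], embed: Callable[[str], Vector]) -> List[Tuple[str, Vector]]:
        # Indexing: load your data as chunks and embed each one so queries can match against it.
        return [(chunk, embed(chunk)) for chunk in chunks]

    def retrieve(query: str, index: List[Tuple[str, Vector]],
                 embed: Callable[[str], Vector], k: int = 3) -> List[str]:
        # Retrieval: rank chunks by similarity to the query and keep only the most relevant context.
        q = embed(query)
        ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
        return [chunk for chunk, _ in ranked[:k]]

    def build_prompt(query: str, context: List[str]) -> str:
        # Generation: the retrieved context and the user query are sent to the LLM together.
        return (
            "Answer the question using only the context below.\n\n"
            "Context:\n" + "\n---\n".join(context) +
            f"\n\nQuestion: {query}"
        )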

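The relevance eval in the second bullet can be sketched the same way. The llm() callable and the prompt template below are hypothetical stand-ins for an LLM-as-judge setup like the one covered in the workshop.

    # Minimal sketch of an LLM eval that grades retrieved chunks (illustrative only).
    # llm() is assumed to be any callable that takes a prompt string and returns the model's reply.
    from typing import Callable, List

    RELEVANCE_TEMPLATE = """You are grading whether a retrieved document chunk is relevant to a user query.
    Answer with exactly one word: relevant or irrelevant.

    Query: {query}
    Chunk: {chunk}
    Answer:"""

    def grade_retrieved_chunks(query: str, chunks: List[str], llm: Callable[[str], str]) -> List[str]:
        # Ask the judge LLM for a label per chunk; the labels give visibility into retrieval quality.
        labels = []
        for chunk in chunks:
            reply = llm(RELEVANCE_TEMPLATE.format(query=query, chunk=chunk)).strip().lower()
            labels.append("irrelevant" if reply.startswith("irrelevant") else "relevant")
        return labels
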
Access The Recording

Speakers

Mikyo King
Head of Open Source, Arize AI

Amber Roberts
ML Growth Lead, Arize AI
