November 7th & 14th, 2023
9:00am PT
Virtual
Join us on November 7th and November 14th for this free, two-part virtual workshop, where participants will gain hands-on experience with LLM evaluation metrics.
In Part 1 of this workshop, we will discuss how implementing LLM evaluations provides scalability, flexibility, and consistency for your LLM orchestration framework. In Part 2, we will work through a code-along Google Colab notebook to add evaluations to your LLM outputs. Attendees will walk away able to implement LLM observability for their LLM applications.
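To give a taste of what the code-along covers, here is a minimal sketch of one common approach: an LLM-as-judge evaluation of a single output. The `judge_relevance` helper, the judge prompt, and the model choice are illustrative assumptions rather than the workshop's actual notebook code, and the sketch assumes the `openai` Python client (v1+) with an `OPENAI_API_KEY` set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def judge_relevance(question: str, answer: str) -> str:
    """Ask a judge model to label an LLM answer as 'relevant' or 'irrelevant'."""
    prompt = (
        "You are evaluating an LLM response.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Reply with exactly one word: relevant or irrelevant."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical judge model; any capable model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic labels keep evals reproducible
    )
    return response.choices[0].message.content.strip().lower()


print(judge_relevance(
    "What is LLM observability?",
    "LLM observability means tracing and evaluating your LLM application.",
))
```

The same pattern scales from one output to a full dataset: run the judge over every row and aggregate the labels into a metric.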
Key Objectives:
- Deep-dive into how performance metrics can make LLMs more ethical, safe, and reliable.
- Use custom and predefined metrics, such as accuracy, fluency, and coherence, to measure the model's performance.
- Gain hands-on experience leveraging open source tools like Phoenix, LlamaIndex, and LangChain for building and maintaining LLM applications (see the sketch after this list).
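As a preview of the Phoenix piece, the sketch below runs one of Phoenix's predefined evals (RAG relevancy) over a small dataframe of query/document pairs. The import paths follow the `phoenix.evals` module and may differ across Phoenix releases (older versions exposed the same names under `phoenix.experimental.evals`); the example data and the judge model are our own assumptions.

```python
import pandas as pd
from phoenix.evals import (
    OpenAIModel,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    llm_classify,
)

# A small batch of (query, retrieved document) pairs to evaluate.
df = pd.DataFrame(
    {
        "input": ["What is LLM observability?"],
        "reference": ["LLM observability means tracing and evaluating LLM apps."],
    }
)

# Rails constrain the judge's output to the template's valid labels.
rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())

evals_df = llm_classify(
    dataframe=df,
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,  # a predefined Phoenix eval template
    model=OpenAIModel(model="gpt-4"),  # hypothetical judge model
    rails=rails,
)
print(evals_df["label"])
```

A custom metric follows the same shape: swap in your own prompt template and rails, and `llm_classify` returns a label per row that you can aggregate however your application requires.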