10/12/23 @ 5:30pm PT
San Francisco, CA
LLM use cases are growing fast – chatbots, summarization, Q&A assistants, code generation, and more. As these LLM apps move from development to production, teams need to evaluate how their use case performs and to drill down into individual traces and spans for visibility into where the application breaks. In this hands-on workshop you will build and deploy a complex LLM app with BentoML's OpenLLM, then troubleshoot, evaluate, and trace it with Arize Phoenix. You will learn how to:
- Build a powerful LLM application with a native LangChain integration, and easily serve and deploy it.
- Use the Phoenix LLM Evals library, designed for simple, fast, and accurate LLM-based evaluations.
- Troubleshoot and debug your LLM app with Phoenix traces and spans – to find where the application broke when LangChain is used.
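As a rough sketch of the setup the workshop covers, the commands below install the two stacks and wire Phoenix tracing into LangChain. Package names, the example model, and the Phoenix/OpenLLM APIs are assumptions based on the versions current at the time of the workshop, not a guaranteed quickstart:

```shell
# Install the serving and observability stacks (package names are assumptions).
pip install openllm arize-phoenix langchain

# Serve an open-source model with OpenLLM (model name is just an example).
openllm start opt

# Launch the Phoenix UI and instrument LangChain so chain runs are
# captured as traces and spans (API current as of late 2023):
python - <<'EOF'
import phoenix as px
from phoenix.trace.langchain import LangChainInstrumentor

px.launch_app()                       # starts the local Phoenix UI
LangChainInstrumentor().instrument()  # auto-traces LangChain calls
EOF
```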