This workshop will provide the skills you need to build and analyze your LLM-powered search system efficiently, improving performance and user satisfaction. You will gain insight into user queries and evaluate answer quality to refine your search system and elevate the overall user experience. During the hands-on session, you will explore practical techniques such as creating a Pinecone vector index for efficient query processing and retrieval. You will also work with user query and knowledge base data, complete with embeddings generated by the OpenAI API. Leveraging Phoenix, the open-source ML observability library by Arize AI, you will uncover clusters of user queries with limited or insufficient knowledge base coverage.
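To give a feel for the first hands-on step, here is a minimal sketch of embedding text with the OpenAI API and upserting the vectors into a new Pinecone index. The index name, cloud region, and embedding model are illustrative assumptions, not the workshop's actual configuration:

```python
def build_index_and_upsert(queries, openai_api_key, pinecone_api_key):
    """Embed `queries` with OpenAI and store the vectors in a Pinecone index.

    A sketch only: index name, region, and model are hypothetical choices.
    """
    from openai import OpenAI
    from pinecone import Pinecone, ServerlessSpec

    oai = OpenAI(api_key=openai_api_key)
    pc = Pinecone(api_key=pinecone_api_key)

    # text-embedding-ada-002 returns 1536-dimensional vectors
    resp = oai.embeddings.create(model="text-embedding-ada-002", input=queries)
    vectors = [(str(i), d.embedding) for i, d in enumerate(resp.data)]

    # Cosine similarity pairs naturally with OpenAI embeddings
    pc.create_index(
        name="workshop-kb",  # hypothetical index name
        dimension=1536,
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-east-1"),
    )
    pc.Index("workshop-kb").upsert(vectors=vectors)
```

Once populated, the same index serves retrieval at query time: embed the incoming user query and call `query(vector=..., top_k=...)` on the index.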
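The cluster-analysis step can be sketched with Phoenix by loading query and knowledge base embeddings as primary and reference datasets; clusters dense in queries but sparse in the knowledge base point to coverage gaps. The column names and schema below are assumptions for illustration (Phoenix's API has also evolved across releases; this follows the earlier `px.Dataset` / `px.launch_app` pattern):

```python
def launch_phoenix(query_df, kb_df):
    """Visualize user-query vs. knowledge-base embeddings side by side.

    A sketch only: assumes DataFrames with `embedding` (list of floats)
    and `text` (raw string) columns.
    """
    import phoenix as px

    schema = px.Schema(
        embedding_feature_column_names={
            "text_embedding": px.EmbeddingColumnNames(
                vector_column_name="embedding",  # one vector per row
                raw_data_column_name="text",     # original query / document
            )
        }
    )
    # Queries as the primary dataset, knowledge base as the reference
    return px.launch_app(
        primary=px.Dataset(dataframe=query_df, schema=schema, name="queries"),
        reference=px.Dataset(dataframe=kb_df, schema=schema, name="knowledge_base"),
    )
```

In the Phoenix UI, the UMAP projection of both datasets makes under-served regions visible at a glance, which is the starting point for deciding what to add to the knowledge base.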