ALL SESSIONS

AI Research Frontiers

Cutting-edge research, emerging techniques, and theoretical advancements in AI observability and LLM evaluation. Explore the latest findings and methodologies pushing the boundaries of what’s possible in understanding, evaluating, and improving AI-powered systems.

Rohan Taori
PhD Student

Making Language Modeling Accessible with Synthetic Data

Nik Spirin
Director of GenAI and LLMOps

The Goldilocks Approach to LLMs: Balancing Accuracy, Latency, and Cost for Optimal Performance

Joe Palermo
Member of Technical Staff

Customizing GPT-4

Sandeep Subramanian
Research Scientist

CodeStral: LLMs for Code from MistralAI

Prakash Chockalingam
Product Manager

MLOps for GenAI: Lessons Learnt from Building and Deploying Gemini Fine-Tuned Models at Google

Bethany Wang
Staff Software Engineer

Foundation Model Evaluation: Metrics, AutoRaters and Data

Charles Packer
PhD Candidate at UC Berkeley

Building the LLM OS

Ash Vardanian
Founder @ Unum

Semantic Search and Retrieval Augmentation at Scale

Tianjun Zhang
PhD Candidate at UC Berkeley

Gorilla LLM: System and Algorithm for Building LLM Agents with Tools

Shishir G Patil
PhD Candidate at UC Berkeley

Teaching LLMs to Use Tools at Scale

AI Builders’ Guild

Cutting-edge research, emerging techniques, and theoretical advancements in AI observability and LLM evaluation. Explore the latest findings and methodologies pushing the boundaries of what’s possible in understanding, evaluating, and improving AI-powered systems.

Devin Stein
Founder

Continual Learning in Agents

Ofer Mendelevitch
Head of Developer Relations

Trust and Accuracy in RAG: The Journey to Enterprise-Scale Applications

Denys Linkov
Head of Machine Learning

Product Impacts of Editable Prompts

Hakan Tekgul
Solutions Architect

Evaluating and Tracing a Multi-Modal RAG Application

Jerry Liu
Co-founder/CEO

Building Advanced Question-Answering Agents Over Complex Data

Gabriel Paunescu
Founder

Multi-Agent RAG: An Agent for Every Role

Safeer Mohiuddin
Co-Founder, Guardrails AI

Cutting through the LLM Eval Noise

Yujian Tang
CEO & Founder @ OSS4AI

1001 Ways to Build an LLM App

Cyrus Nouroozi
Co-founder & CEO @ Zenbase AI

Prompt Engineering is Dead, Long Live Prompt Engineering

Jared Zoneraich
Founder @ PromptLayer

Evaluation Engineering: Iterative Strategies to Prompt-Specific Evaluations

AI Innovators

Explore real-world use cases, the challenges of deploying products, and the scaling of AI across organizations. AI leaders share learnings on industry-specific considerations to navigate the complexities of enterprise AI deployment.

Kasey Roh
Head of US Business

Full-Stack LLM for Enterprise Use Cases

Facundo Santiago
Senior Product Manager @ Azure AI

Using the Right Model for the Right Job with the Azure AI Model Catalog

Kristen Womack
Principal Product Manager, Azure Developer CLI

From Concept to Cloud: Building and Deploying Cloud-Native GenAI Applications

Mohamed Moustafa
Founder

Keeping Users Happy When LLMs Suck

Neeral Beladia
Senior Member of Technical Staff (MLE)

Building GenAI Models for Enterprise Use Cases (Lessons Learnt)

Kavin Karthik
Member of Technical Staff

GPTs for Work

Anu Trivedi, Austin Kerby
Panel

Finding Equilibrium: Optimizing Performance, Cost, and User Feedback in LLM Apps

Swapna Kasula, James Emerson, Hien Luu
Panel

Mission Critical: Scale to Conquer the Million-User Milestone

Shobhit Varshney, Lavanya Ramani, Vivek Gangasani
Panel

Continuous Evaluation, Continuous Improvement: Metrics and Strategies to Ensure AI Quality

Prateek Burman, Vice President, AI & Machine Learning
Panel

Architecting Trust: Challenges & Strategies for GenAI Safety and Alignment

Register for Arize:Observe