RAFT: Adapting Language Models To RAG

In the rapidly advancing field of AI, Large Language Models (LLMs) are breaking new ground. Once used primarily to provide information within dialogue systems, these models can now actively engage with tools and execute actions on real-world applications and services with little to no human intervention. In this talk, Tianjun Zhang presents Gorilla (tool usage) and RAFT (Retrieval-Augmented Fine-Tuning, for RAG and document understanding), two fine-tuned model projects developed for building LLM agents.

Tianjun Zhang

Gorilla LLM

Tianjun Zhang is a final-year PhD student at UC Berkeley, affiliated with the Sky Lab and the BAIR Lab. His research focuses on how to train and safely deploy autonomous foundation model agents. He leads several successful open-source projects, including Gorilla LLM, RAFT, and LiveCodeBench, which are widely used by developers and have been featured by companies including OpenAI, Cohere, and Replit. He works with Ion Stoica, Joseph Gonzalez, Pieter Abbeel, and Sergey Levine.
