In this talk, UC Berkeley researcher Shishir Patil explores an innovative approach to integrating Large Language Models (LLMs) with various tools via APIs. Bridging LLMs with APIs presents a significant challenge, primarily because models struggle to generate precise input arguments and are prone to hallucinating API calls. Gorilla LLM, trained with their novel Retriever-Aware Training (RAT), surpasses all open-source LLMs at writing API calls. Gorilla also introduces a novel PL-inspired metric to measure hallucination, a failure mode commonly encountered in LLMs. Gorilla is an open-source project that has served hundreds of thousands of user requests, with enterprise adoption and an energetic community behind it. Patil also spotlights the Berkeley Function Calling Leaderboard, which evaluates an LLM's ability to call functions (tools) accurately. He concludes with lessons from their deployment experience and presents open research questions to enable wider integration of LLMs in applications.