
Live | Every Wednesday
10:15am PT | 45 minutes
Join us every Wednesday for an engaging discussion session where we delve into the latest technical papers, covering a range of topics including large language models (LLMs), generative models, ChatGPT, and more. This recurring event offers an opportunity to collectively analyze and exchange insights on cutting-edge research in these areas and their broader implications.
This paper introduces DragGAN, a novel approach for precise control over the pose, shape, expression, and layout of objects generated by GANs. It lets users “drag” any points of an image to specific target points; in other words, it deforms the image with fine-grained control over where pixels end up while keeping the output realistic. Paper: https://arxiv.org/abs/2305.10973
View Recording: https://youtu.be/DxzsgV8rTOw
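For a flavor of the core mechanism, here is a minimal, illustrative sketch of a motion-supervision step in the spirit of DragGAN: features in a small patch around each handle point are nudged a short step toward the corresponding target point, with a stop-gradient on the source patch. The real method optimizes the StyleGAN latent code with this loss and re-tracks the handle points after every step; all names and defaults below are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def sample_features(feat, points):
    """Bilinearly sample a feature map at (x, y) pixel coordinates.

    feat:   (1, C, H, W) feature map from an intermediate generator layer
    points: (N, 2) pixel coordinates on the same device as feat
    returns (N, C) feature vectors
    """
    _, _, H, W = feat.shape
    grid = points.clone()
    grid[:, 0] = 2.0 * grid[:, 0] / (W - 1) - 1.0        # x -> [-1, 1]
    grid[:, 1] = 2.0 * grid[:, 1] / (H - 1) - 1.0        # y -> [-1, 1]
    grid = grid.view(1, 1, -1, 2)                        # (1, 1, N, 2)
    out = F.grid_sample(feat, grid, align_corners=True)  # (1, C, 1, N)
    return out[0, :, 0, :].t()                           # (N, C)

def motion_supervision_loss(feat, handles, targets, radius=3, step=2.0):
    """One motion-supervision step: pull a patch around each handle point
    a short step along the direction toward its target point. The source
    patch is detached, so gradients flow only through the 'destination'
    features and push content toward the target."""
    loss = feat.new_zeros(())
    for p, t in zip(handles, targets):
        direction = t - p
        direction = direction / (direction.norm() + 1e-8)
        offs = torch.arange(-radius, radius + 1, dtype=feat.dtype, device=feat.device)
        dy, dx = torch.meshgrid(offs, offs, indexing="ij")
        patch = torch.stack([dx.reshape(-1), dy.reshape(-1)], dim=-1) + p  # (M, 2)
        moved = patch + step * direction                                   # (M, 2)
        src = sample_features(feat, patch).detach()   # stop gradient on source
        dst = sample_features(feat, moved)
        loss = loss + (dst - src).abs().mean()
    return loss

# Toy usage: a random "feature map" and one handle/target pair.
feat = torch.randn(1, 64, 32, 32, requires_grad=True)
handles = torch.tensor([[10.0, 12.0]])
targets = torch.tensor([[20.0, 12.0]])
motion_supervision_loss(feat, handles, targets).backward()
```

In the full method, the handle points are relocated after each optimization step by a nearest-neighbor search in the same feature space (point tracking), so the drag continues from wherever the content actually moved.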
This week we’re diving into the world of Retrieval-Augmented Generation (RAG)!
We know GPT-like LLMs are great at soaking up knowledge during pre-training, and fine-tuning them can produce impressive task-specific results. But when it comes to knowledge-intensive tasks, they still fall short. Plus, it’s hard to tell where their answers come from or to update what they know.
Enter RAG models, a hybrid that combines the best of both worlds: the learned knowledge of a pre-trained seq2seq model (the parametric memory) and an explicit, non-parametric memory, in this case a searchable dense index of all of Wikipedia.
Link to paper: https://arxiv.org/abs/2005.11401
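To make the parametric/non-parametric split concrete, here is a toy retrieve-then-generate sketch. The TF-IDF index stands in for the paper's dense Wikipedia index, and `llm_generate` is a hypothetical placeholder for the seq2seq generator; none of this is the paper's actual code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny corpus playing the role of the non-parametric memory.
documents = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "GANs consist of a generator and a discriminator trained adversarially.",
    "Retrieval-augmented models condition generation on retrieved passages.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query, k=2):
    """Return the k passages most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def llm_generate(prompt):
    # Placeholder for the parametric generator (a seq2seq LLM in the paper).
    return f"[answer generated from]\n{prompt}"

def rag_answer(question):
    passages = retrieve(question)
    prompt = "Context:\n" + "\n".join(passages) + f"\n\nQuestion: {question}\nAnswer:"
    return llm_generate(prompt)

print(rag_answer("When was the Eiffel Tower completed?"))
```

Because the memory lives outside the model, swapping or updating the index changes what the system can answer without retraining, and the retrieved passages double as a trace of where an answer came from.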
VOYAGER, the first LLM-powered embodied lifelong learning agent in Minecraft, autonomously explores the world, acquires skills, and makes discoveries without human intervention. It outperforms prior approaches at playing Minecraft and successfully transfers its learned skills to solve novel tasks in new Minecraft worlds, where other techniques struggle to generalize.
Link to paper: https://arxiv.org/pdf/2305.16291.pdf
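As a rough mental model of the lifelong-learning loop the paper describes (an automatic curriculum proposes tasks, an LLM writes executable code for each task, and verified programs accumulate in a growing skill library), here is a schematic sketch. Every function below is a toy stand-in, not VOYAGER's actual interface.

```python
skill_library = {}   # task description -> program text learned for that task

def propose_next_task(completed_tasks):
    # Stand-in for the automatic curriculum (an LLM call in the paper):
    # here we just walk through a fixed backlog for illustration.
    backlog = ["mine wood", "craft a pickaxe", "mine stone"]
    remaining = [t for t in backlog if t not in completed_tasks]
    return remaining[0] if remaining else None

def write_program(task, relevant_skills):
    # Stand-in for the code-writing LLM, conditioned on retrieved skills.
    return f"# program for: {task}\n# reuses: {sorted(relevant_skills)}"

def execute_in_environment(program):
    # Stand-in for running the program in the game and collecting feedback.
    return True, "ok"

def lifelong_learning_loop(max_iterations=10):
    completed = []
    for _ in range(max_iterations):
        task = propose_next_task(completed)
        if task is None:
            break
        # Retrieve previously learned skills that share words with the task.
        relevant = {name for name in skill_library
                    if set(name.split()) & set(task.split())}
        program = write_program(task, relevant)
        success, feedback = execute_in_environment(program)
        if success:
            skill_library[task] = program   # the skill library grows over time
            completed.append(task)
        # In the paper, failures trigger another round of prompting with the
        # execution feedback; a retry loop would go here.
    return completed

print(lifelong_learning_loop())
```

The stored programs are what make the learning "lifelong": because skills persist as reusable code rather than model weights, they can be retrieved and composed when the agent is dropped into a new world.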