Mar 15, 2023
Ilya Sutskever, a cofounder and chief scientist of OpenAI and one of the primary minds behind the large language model GPT-4 and its public progeny, ChatGPT, talks about AI hallucinations and his vision of AI democracy.
Mar 2, 2023
In this episode, Ben Sorscher, a PhD student at Stanford, talks about reducing the size of data sets used to train models, particularly large language models, which are pushing the limits of scaling because of the enormous cost of training and the environmental impact of generating the electricity they consume.
Feb 16, 2023
Yann LeCun talks about what's missing in large language models and how his new joint embedding predictive architecture may be a step toward filling that gap.
Feb 1, 2023
Terry Sejnowski, an AI pioneer, chairman of the NeurIPS Foundation, and co-creator of Boltzmann machines, whose sleep-wake cycle has been repurposed in Geoff Hinton's new Forward-Forward algorithm, talks in this episode about the NeurIPS conference and how advances in deep learning may help us understand our own brains.
Jan 19, 2023
Geoffrey Hinton gives a deep dive into his new learning algorithm, which he calls the forward-forward algorithm, a more plausible model for how the cerebral cortex might learn.