Les Houches 2022 Special Issue: Editorial
- Read the special issue paper; several of the talks look interesting, and the YouTube videos are worth watching. Moreover, these lecturers' research is worth digging into in depth, as some of them have recently started focusing on the theory of LLMs.
- Nathan Srebro (TTI Chicago) [videos] introduced the fundamental concepts of the
theory of deep learning from the computer science side in ‘Applying statistical learning
theory to deep learning’.
- Andrea Montanari (Stanford) [videos] gave a series of lectures summarizing a large
part of the progress obtained in the last few years in ‘Neural networks from a
nonparametric viewpoint’, including results on the generalization properties of kernel methods, random features, and the neural tangent kernel.
- Boaz Barak (Harvard) [videos] discussed key concepts from computational complexity for problems ranging from tensor estimation to deep learning in a lecture
called ‘Computational Complexity of Deep learning: Fundamental limitations and
Empirical phenomena’.
- Yasaman Bahri (Google) and Boris Hanin (Princeton) [videos] described the behaviour of infinite-width neural networks, as well as corrections and extensions to this limit,
in a lecture called ‘Deep Learning at Large and Infinite Width’.
- Julia Kempe (NYU) [videos] gave us a series of lectures on ‘Data, Physics and
Kernels and how can (statistical) physics tools help the DL practitioner’, spanning
several currently active topics such as adversarial training.
- Matthieu Wyart (EPFL) [videos] presented a seminar titled ‘Loss landscape, overparametrization and curse of dimensionality in deep learning’ that was not collected
for the lecture notes.