Integreat Tuesday seminar: Mohammad Emtiyaz Khan

The Bayesian learning rule



Abstract

Humans and animals have a natural ability to autonomously learn and quickly adapt to their surroundings. How can we design machines that do the same? In this talk, I will present Bayesian principles to bridge this gap between humans and machines. I will show that a wide variety of machine-learning algorithms are instances of a single learning rule derived from Bayesian principles. I will also show our recent results on scaling up variational learning to large deep networks (e.g., GPT-2). Time permitting, I will briefly discuss the dual perspective, which yields new mechanisms for knowledge transfer in learning machines.
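As a rough sketch of the rule named in the title (following the notation of the JMLR paper listed under References; see the paper for the precise statement), the Bayesian learning rule updates the natural parameter of an exponential-family candidate posterior by a natural-gradient step:

```latex
% Sketch of the Bayesian learning rule:
%   q_\lambda  : exponential-family candidate posterior with natural parameter \lambda
%   \bar\ell   : loss as a function of the model parameters \theta
%   \mathcal H : entropy of q_\lambda
%   \tilde\nabla_\lambda : natural gradient with respect to \lambda
\lambda_{t+1} \;=\; \lambda_t \;-\; \rho_t\,
  \tilde{\nabla}_{\lambda}\,
  \Big( \mathbb{E}_{q_{\lambda_t}}\!\big[\bar{\ell}(\theta)\big]
        \;-\; \mathcal{H}(q_{\lambda_t}) \Big)
```

Different choices of the posterior family and different approximations of the expectation recover familiar algorithms (e.g., gradient descent, Newton-like and Adam-like methods, variational inference), which is the sense in which many algorithms are instances of one rule.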

References

  1. M.E. Khan, H. Rue. The Bayesian Learning Rule. JMLR (arXiv).
  2. P. Nickl, L. Xu, D. Tailor, T. Möllenhoff, M.E. Khan. The Memory Perturbation Equation: Understanding Model's Sensitivity to Data. NeurIPS 2023 (arXiv).
  3. Y. Shen*, N. Daheim*, B. Cong, P. Nickl, G.M. Marconi, C. Bazan, R. Yokota, I. Gurevych, D. Cremers, M.E. Khan, T. Möllenhoff. Variational Learning is Effective for Large Deep Networks (arXiv, code).

Bio

Dr. Mohammad Emtiyaz Khan, RIKEN Center for Advanced Intelligence Project (AIP), Tokyo

Institutional homepage (external link)

Practical

The Tuesday seminar series is devoted to various topics relevant to Integreat's research focus. Presenters from the Integreat community and beyond have 40 minutes to present, followed by a group discussion.

The seminars are open to everybody.

For those unable to attend in person: https://uio.zoom.us/j/62535744013 

Point of contact:

 

Published Mar. 11, 2024 9:30 AM - Last modified Mar. 11, 2024 9:30 AM