NeurIPS workshop on
Interpretable Inductive Biases
and Physically Structured Learning
December 12th, 2020
Abstract
Over the last decade, deep networks have propelled machine learning to accomplish tasks previously considered far out of reach, such as human-level performance in image classification and game-playing. However, research has also shown that deep networks are often brittle to distributional shifts in the data: human-imperceptible changes to an input can lead to absurd predictions. In many application areas, including physics, robotics, and the social and life sciences, this motivates the need for robustness and interpretability, so that models can be trusted in practical applications. Interpretable and robust models can be constructed by incorporating prior knowledge into the model or the learning process as an inductive bias, thereby reducing overfitting and making the model easier to understand for scientists and non-machine-learning experts. In this workshop, we bring together researchers from different application areas to study the inductive biases that can be used to obtain interpretable models. We invite speakers from physics, robotics, and related areas to share their ideas and success stories on inductive biases. We also invite researchers to submit extended abstracts for contributed talks and posters, to initiate lively discussion on inductive biases and foster this growing community of researchers.
Schedule
| Time (UTC) | Event |
| --- | --- |
14:30 - 14:35 | Introduction and opening remarks |
14:35 - 14:50 | Contributed Talk: Thomas Pierrot - Learning Compositional Neural Programs for Continuous Control |
14:50 - 15:10 | Invited Talk: Jessica Hamrick - Structured Computation and Representation in Deep Reinforcement Learning |
15:10 - 15:25 | Contributed Talk: Manu Kalia - Deep learning of normal form autoencoders for universal, parameter-dependent dynamics |
15:25 - 15:50 | Invited Talk: Rose Yu - Physics-Guided AI for Learning Spatiotemporal Dynamics |
15:50 - 16:05 | Contributed Talk: Ferran Alet - Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time |
16:05 - 17:00 | Poster Session: gather.town |
17:00 - 17:25 | Invited Talk: Frank Noé - PauliNet: Deep Neural Network Solution of the Electronic Schrödinger Equation |
17:25 - 17:40 | Contributed Talk: Kimberly Stachenfeld - Graph Networks with Spectral Message Passing |
17:40 - 18:10 | Invited Talk: Franziska Meier - Inductive Biases for Models and Learning-to-Learn |
18:10 - 18:25 | Contributed Talk: Rui Wang - Shapley Explanation Networks |
18:25 - 18:55 | Invited Talk: Jeannette Bohg - On the Role of Hierarchies for Learning Manipulation Skills |
18:55 - 20:00 | Panel Discussion |
20:00 - 21:00 | Poster Session: gather.town |
21:00 - 21:15 | Contributed Talk: Liwei Chen - Deep Learning Surrogates for Computational Fluid Dynamics |
21:15 - 22:15 | Invited Talk: Maziar Raissi |
22:15 - 22:30 | Closing Remarks |