NeurIPS workshop on
Interpretable Inductive Biases
and Physically Structured Learning

December 12th, 2020


Abstract

Over the last decade, deep networks have propelled machine learning to accomplish tasks previously considered far out of reach, such as human-level performance in image classification and game-playing. However, research has also shown that deep networks are often brittle to distributional shifts in the data: human-imperceptible changes to an input can lead to absurd predictions. In many application areas, including physics, robotics, and the social and life sciences, this motivates the need for robustness and interpretability, so that models can be trusted in practical applications. Interpretable and robust models can be constructed by incorporating prior knowledge into the model or the learning process as an inductive bias, thereby avoiding overfitting and making the model easier to understand for scientists and non-machine-learning experts. In this workshop, we bring together researchers from different application areas to study the inductive biases that can be used to obtain interpretable models. We invite speakers from physics, robotics, and other related areas to share their ideas and success stories on inductive biases. We also invite researchers to submit extended abstracts for contributed talks and posters, to initiate lively discussion on inductive biases and foster this growing community of researchers.

Schedule

Time (UTC) Event
14:30 - 14:35 Introduction and opening remarks
14:35 - 14:50 Contributed Talk: Thomas Pierrot - Learning Compositional Neural Programs for Continuous Control
14:50 - 15:10 Invited Talk: Jessica Hamrick - Structured Computation and Representation in Deep Reinforcement Learning
15:10 - 15:25 Contributed Talk: Manu Kalia - Deep learning of normal form autoencoders for universal, parameter-dependent dynamics
15:25 - 15:50 Invited Talk: Rose Yu - Physics-Guided AI for Learning Spatiotemporal Dynamics
15:50 - 16:05 Contributed Talk: Ferran Alet - Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
16:05 - 17:00 Poster Session: gather.town
17:00 - 17:25 Invited Talk: Frank Noé - PauliNet: Deep Neural Network Solution of the Electronic Schrödinger Equation
17:25 - 17:40 Contributed Talk: Kimberly Stachenfeld - Graph Networks with Spectral Message Passing
17:40 - 18:10 Invited Talk: Franziska Meier - Inductive Biases for Models and Learning-to-Learn
18:10 - 18:25 Contributed Talk: Rui Wang - Shapley Explanation Networks
18:25 - 18:55 Invited Talk: Jeanette Bohg - On the Role of Hierarchies for Learning Manipulation Skills
18:55 - 20:00 Panel Discussion
20:00 - 21:00 Poster Session: gather.town
21:00 - 21:15 Contributed Talk: Liwei Chen - Deep Learning Surrogates for Computational Fluid Dynamics
21:15 - 22:15 Invited Talk: Maziar Raissi
22:15 - 22:30 Closing Remarks

All times are in Coordinated Universal Time (UTC)

Accepted Papers

Authors - Title
Giorgio Giannone, Asha Anoosheh, Alessio Quaglino, Pierluca D’Oro, Marco Gallieri, Jonathan Masci - Real-time Classification from Short Event-Camera Streams using Input-filtering Neural ODEs
Benjamin K Miller, Mario Geiger, Tess Smidt, Frank Noé - Relevance of Rotationally Equivariant Convolutions for Predicting Molecular Properties
Dharma KC, Chicheng Zhang - Improving the trustworthiness of image classification models by utilizing bounding-box annotations
Marie Déchelle, Jeremie Dona, Kevin Plessis-Fraissard, Patrick Gallinari, Marina Levy - Bridging Dynamical Models and Deep Networks to Solve Forward and Inverse Problems
Sangwon Kim, Mira Jeong, Byoung Chul Ko - Is the Surrogate Model Interpretable?
Chacha Chen, Guanjie Zheng, Hua Wei, Zhenhui Li - Physics-informed Generative Adversarial Networks for Sequence Generation with Limited Data
Li-Wei Chen, Xiangyu Hu, Berkay Alp Cakal, Nils Thuerey - Deep Learning Surrogates for Computational Fluid Dynamics
Matthew Painter, Adam Prugel-Bennett, Jonathon Hare - On the Structure of Cyclic Linear Disentangled Representations
Rui Wang, Xiaoqian Wang, David I Inouye - Shapley Explanation Networks
Ričards Marcinkevičs, Julia Vogt - Interpretable Models for Granger Causality Using Self-explaining Neural Networks
Patrick Emami, Pan He, Anand Rangarajan, Sanjay Ranka - A Symmetric and Object-Centric World Model for Stochastic Environments
Benjamin Wild, David M Dormagen, Michael L Smith, Tim Landgraf - Individuality in the hive - Learning to embed lifetime social behavior of honey bees
Jiaxin Zhang, Congjie Wei, Chenglin Wu - Thermodynamic Consistent Neural Networks for Learning Material Interfacial Mechanics
Grégoire Mialon, Dexiong Chen, Alexandre d’Aspremont, Julien Mairal - A Trainable Optimal Transport Embedding for Feature Aggregation
Kimberly Stachenfeld, Jonathan Godwin, Peter Battaglia - Graph Networks with Spectral Message Passing
Sanghoon Myung, Hyunjae Jang, Byungseon Choi, Jisu Ryu, Hyuk Kim, Sang Wuk Park, Changwook Jeong, Dae Sin Kim - A novel approach for semiconductor etching process with inductive biases
Sebastian Kaltenbach, Phaedon-Stelios Koutsourelakis - Physics-aware, data-driven discovery of slow and stable coarse-grained dynamics for high-dimensional multiscale systems
Anthony Bourached, Ryan-Rhys Griffiths, Robert Gray, Ashwani Jha, Parashkev Nachev - Generative Model-Enhanced Human Motion Prediction
Tatiana Lopez Guevara, Michael Burke, Kartic Subr, Nicholas K Taylor - IV-Posterior: Inverse Value Estimation for Interpretable Policy Certificates
Nitin Kamra, Yan Liu - Gradient-based Optimization for Multi-resource Spatial Coverage
Rui Wang, Danielle Maddix, Christos Faloutsos, Yuyang Wang, Rose Yu - Learning Dynamical Systems Requires Rethinking Generalization
Nima Dehmamy, Robin Walters, Yanchen Liu, Rose Yu - Lie Algebra Convolutional Networks with Automatic Symmetry Extraction
Robin Rombach, Patrick Esser, Bjorn Ommer - An Image is Worth 16 × 16 Tokens: Visual Priors for Efficient Image Synthesis with Transformers
Thao Nguyen, Maithra Raghu, Simon Kornblith - Uncovering How Neural Network Representations Vary with Width and Depth
Mario A Lino, Chris Cantwell, Anil Anthony Bharath, Eduardo Pignatelli, Stathi Fotiadis - Simulating Surface Wave Dynamics with Convolutional Networks
Amartya Sanyal, Puneet Dokania, Varun Kanade, Philip Torr - Choice of Representation Matters for Adversarial Robustness
Sameer Dharur, Purva Tendulkar, Dhruv Batra, Devi Parikh, Ramprasaath Ramasamy Selvaraju - SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency
Thomas Pierrot, Feryal Behbahani, Karim Beguir, Nando de Freitas - Learning Compositional Neural Programs for Continuous Control
Augustin Harter, Andrew Melnik, Gaurav Kumar, Dhruv Agarwal, Helge Ritter - Solving Physics Puzzles by Reasoning about Paths
Luz M. Blaz, Moises Arizpe - Modelling Advertising Awareness, an Interpretable and Differentiable Approach
Hannes Bergkvist, Peter Exner, Paul Davidsson - Constraining neural networks output by an interpolating loss function with region priors
Ellen Rushe, Brian Mac Namee - Deep Context-Aware Novelty Detection
Ferran Alet, Kenji Kawaguchi, Maria Bauza Villalonga, Nurullah Giray Kuru, Tomas Lozano-Perez, Leslie Kaelbling - Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
Pranay Pasula - Complex Skill Acquisition through Simple Skill Imitation Learning
Manu Kalia, Steven L. Brunton, Hil G.E. Meijer, Christoph Brune, Nathan Kutz - Deep learning of normal form autoencoders for universal, parameter-dependent dynamics