Abstract:
A Low Rank Neural Representation (LRNR) is a parametrized family of feedforward neural networks whose weights and biases belong to low-rank linear subspaces. In this talk, we will discuss how LRNRs can serve as efficient low-dimensional representations of solutions to hyperbolic conservation laws. First, we motivate the LRNR architecture by reformulating entropy solutions of scalar conservation laws in a way that reveals their low-dimensional structure. Next, we will show that LRNRs can be trained from numerical solution data through a meta-learning approach, and we demonstrate that the trained LRNRs possess important properties: (1) low dimensionality, (2) smoothness and stability with respect to the parameters even in the presence of shocks, and (3) backpropagation whose computational complexity scales only with the low dimension. Applications within the popular Physics-Informed Neural Networks (PINNs) framework will also be discussed.
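To make the architecture concrete, the sketch below shows one possible layer whose weight matrix is confined to a rank-r subspace, W = U diag(c) Vᵀ, with the bases U, V (and a bias basis B) shared across problem instances and only the small coefficient vectors c, d adapted per instance. This is a minimal illustration under assumed conventions, not the authors' implementation; all names and the exact parametrization are hypothetical.

```python
import torch
import torch.nn as nn

class LRNRLayer(nn.Module):
    """Feedforward layer with weights and biases in rank-r subspaces.

    Hypothetical sketch: W = U @ diag(c) @ V^T and b = B @ d, where the
    bases U, V, B would be meta-learned and shared, while the small
    coefficient vectors c, d vary per problem instance.
    """

    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        # Shared low-rank bases (meta-learned, then frozen in this sketch).
        self.U = nn.Parameter(torch.randn(d_out, rank) / rank**0.5)
        self.V = nn.Parameter(torch.randn(d_in, rank) / rank**0.5)
        self.B = nn.Parameter(torch.randn(d_out, rank) / rank**0.5)
        # Per-instance coefficients: the only O(rank) unknowns once the
        # bases are fixed.
        self.c = nn.Parameter(torch.ones(rank))
        self.d = nn.Parameter(torch.zeros(rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        W = self.U @ torch.diag(self.c) @ self.V.T  # rank-r weight matrix
        b = self.B @ self.d                         # bias in a rank-r subspace
        return torch.tanh(x @ W.T + b)
```

In a sketch like this, adapting to a new problem instance would amount to freezing the bases (e.g., `layer.U.requires_grad_(False)`) and optimizing only c and d, so the per-instance trainable parameter count, and hence the backpropagation cost, scales with the rank rather than with the full layer size.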
This talk is based on joint work with Woojin Cho (Yonsei U.), Kookjin Lee (Arizona State U.), Noseong Park (KAIST), and Gerrit Welper (U. Central Florida).