Neural Tangent Kernel Adaptive Loss #501

@zoemcc

Description

Implementing the Neural Tangent Kernel adaptive loss method proposed in the paper "When and Why PINNs Fail to Train: A Neural Tangent Kernel Perspective" by Sifan Wang, Xinling Yu, and Paris Perdikaris. There is a GitHub repo that should guide the implementation.

The algorithm to implement is Algorithm 1 in the paper. It should be implemented as a concrete subtype of `AbstractAdaptiveLoss` so that it fits within our pre-existing codegen infrastructure in the `discretize_inner_functions` function. The definition of the K kernels is in Lemma 3.1.

i.e.

```julia
struct NeuralTangentKernelAdaptiveLoss <: AbstractAdaptiveLoss
    ...
end
```
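For quick reference, here is a hedged transcription of the quantities involved, as I read the paper (worth double-checking against Lemma 3.1 and Algorithm 1 before relying on it): the K blocks are Gram matrices of the parameter gradients of the boundary/initial predictions u and the PDE residual r, and the loss weights are set from their traces.

```latex
% Kernel blocks (Lemma 3.1, as I read it); x_b are boundary points, x_r residual points:
(K_{uu})_{ij} = \Big\langle \tfrac{\partial u(x_b^i,\theta)}{\partial\theta},\,
                            \tfrac{\partial u(x_b^j,\theta)}{\partial\theta} \Big\rangle,
\qquad
(K_{rr})_{ij} = \Big\langle \tfrac{\partial r(x_r^i,\theta)}{\partial\theta},\,
                            \tfrac{\partial r(x_r^j,\theta)}{\partial\theta} \Big\rangle

% Weight update (Algorithm 1):
\lambda_u = \frac{\operatorname{Tr}(K)}{\operatorname{Tr}(K_{uu})},
\qquad
\lambda_r = \frac{\operatorname{Tr}(K)}{\operatorname{Tr}(K_{rr})},
\qquad
\operatorname{Tr}(K) = \operatorname{Tr}(K_{uu}) + \operatorname{Tr}(K_{rr})
```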

This method is slightly harder to implement in our system than some of the other adaptive loss methods, but not by much. The definition of K requires a selection of points from each domain, which could be generated via a grid, stochastic, or quasi-random strategy. The implementation provided on their GitHub seems to use a Grid strategy, but I don't see why that must always be the case for this quantity (the choice seems arbitrary).

Thus, most of the implementation difficulty is figuring out the best way for this type to maintain its own samples, which may differ from the main PDE domain samplers of the top-level PINN, and then calculating the kernel quantities using those points and the internally generated PDE functions (a sketch follows below). There is a ton of interesting theory in this paper, but implementing the algorithm mainly relies on understanding how to compute the K kernels.
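To make that concrete, here is a minimal sketch of what the type and the weight computation could look like. This is NOT the actual NeuralPDE API: all field names, the `reweight_every`/`points` parameters, and the `ntk_weights` helper are hypothetical.

```julia
# A minimal sketch, not the real implementation: every name below is an
# assumption for illustration.

abstract type AbstractAdaptiveLoss end  # stand-in for the real supertype

mutable struct NeuralTangentKernelAdaptiveLoss{T <: Real} <: AbstractAdaptiveLoss
    reweight_every::Int           # recompute NTK weights every N iterations
    points::Int                   # number of samples this loss maintains per domain
    pde_loss_weights::Vector{T}   # current λ for each PDE (residual) loss
    bc_loss_weights::Vector{T}    # current λ for each boundary/initial loss
end

# Algorithm 1 sets λ_i = Tr(K) / Tr(K_ii) with Tr(K) = Σ_j Tr(K_jj).
# Since K_ii = J_i * J_i' (Lemma 3.1), Tr(K_ii) = ‖J_i‖_F², so the full
# kernel matrices never need to be materialized.
function ntk_weights(jacobians::Vector{<:AbstractMatrix})
    # jacobians[i]: (n_samples × n_params) Jacobian of the i-th loss term,
    # evaluated at this type's own sample points.
    traces = [sum(abs2, J) for J in jacobians]
    return sum(traces) ./ traces
end
```

e.g. `ntk_weights([J_r, J_b])` would return `[λ_r, λ_b]`. A nice consequence is that the weights depend only on traces, i.e. per-sample gradient norms, so the cost stays linear in the number of samples this type maintains rather than quadratic.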
