
Commit c20a6b6

Merge pull request #639 from ArnoStrouwen/md
format markdown
2 parents f5642a4 + f09a1ca commit c20a6b6

25 files changed: +747, -657 lines

.JuliaFormatter.toml

Lines changed: 2 additions & 1 deletion
@@ -1 +1,2 @@
-style = "sciml"
+style = "sciml"
+format_markdown = true

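For context, this one-line config change is what drives the rest of the diff: with `format_markdown = true`, JuliaFormatter also reflows Markdown files and the Julia code blocks inside them. A minimal sketch of applying it locally (an assumption about the workflow, not part of this commit; `format` reads the nearest `.JuliaFormatter.toml` for each file):

```julia
using JuliaFormatter

# Format the repository in place. format(".") walks the tree and applies the
# settings from .JuliaFormatter.toml (style = "sciml", format_markdown = true),
# so both .jl sources and .md files are rewritten to the SciML style.
format(".")
```
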
LICENSE.md

Lines changed: 0 additions & 1 deletion
@@ -19,4 +19,3 @@ The NeuralPDE.jl package is licensed under the MIT "Expat" License:
 > LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 > OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 > SOFTWARE.
->

README.md

Lines changed: 36 additions & 34 deletions
@@ -7,7 +7,7 @@
 [![Build Status](https://github.com/SciML/NeuralPDE.jl/workflows/CI/badge.svg)](https://github.com/SciML/NeuralPDE.jl/actions?query=workflow%3ACI)
 [![Build status](https://badge.buildkite.com/fa31256f4b8a4f95fe5ab90c3bf4ef56055a2afe675435c182.svg?branch=master)](https://buildkite.com/julialang/neuralpde-dot-jl)
 
-[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor's%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
+[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor%27s%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
 [![SciML Code Style](https://img.shields.io/static/v1?label=code%20style&message=SciML&color=9558b2&labelColor=389826)](https://github.com/SciML/SciMLStyle)
 
 NeuralPDE.jl is a solver package which consists of neural network solvers for
@@ -29,19 +29,19 @@ the documentation, which contains the unreleased features.
 
 ## Features
 
-- Physics-Informed Neural Networks for ODE, SDE, RODE, and PDE solving
-- Ability to define extra loss functions to mix xDE solving with data fitting (scientific machine learning)
-- Automated construction of Physics-Informed loss functions from a high level symbolic interface
-- Sophisticated techniques like quadrature training strategies, adaptive loss functions, and neural adapters
-to accelerate training
-- Integrated logging suite for handling connections to TensorBoard
-- Handling of (partial) integro-differential equations and various stochastic equations
-- Specialized forms for solving `ODEProblem`s with neural networks
-- Compatability with [Flux.jl](https://docs.sciml.ai/Flux.jl/stable/) and [Lux.jl](https://docs.sciml.ai/Lux/stable/)
-for all of the GPU-powered machine learning layers available from those libraries.
-- Compatability with [NeuralOperators.jl](https://docs.sciml.ai/NeuralOperators/stable/) for
-mixing DeepONets and other neural operators (Fourier Neural Operators, Graph Neural Operators,
-etc.) with physics-informed loss functions
+- Physics-Informed Neural Networks for ODE, SDE, RODE, and PDE solving
+- Ability to define extra loss functions to mix xDE solving with data fitting (scientific machine learning)
+- Automated construction of Physics-Informed loss functions from a high level symbolic interface
+- Sophisticated techniques like quadrature training strategies, adaptive loss functions, and neural adapters
+to accelerate training
+- Integrated logging suite for handling connections to TensorBoard
+- Handling of (partial) integro-differential equations and various stochastic equations
+- Specialized forms for solving `ODEProblem`s with neural networks
+- Compatability with [Flux.jl](https://docs.sciml.ai/Flux.jl/stable/) and [Lux.jl](https://docs.sciml.ai/Lux/stable/)
+for all of the GPU-powered machine learning layers available from those libraries.
+- Compatability with [NeuralOperators.jl](https://docs.sciml.ai/NeuralOperators/stable/) for
+mixing DeepONets and other neural operators (Fourier Neural Operators, Graph Neural Operators,
+etc.) with physics-informed loss functions
 
 ## Example: Solving 2D Poisson Equation via Physics-Informed Neural Networks
 
@@ -55,52 +55,54 @@ Dxx = Differential(x)^2
 Dyy = Differential(y)^2
 
 # 2D PDE
-eq = Dxx(u(x,y)) + Dyy(u(x,y)) ~ -sin(pi*x)*sin(pi*y)
+eq = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sin(pi * x) * sin(pi * y)
 
 # Boundary conditions
-bcs = [u(0,y) ~ 0.0, u(1,y) ~ 0,
-u(x,0) ~ 0.0, u(x,1) ~ 0]
+bcs = [u(0, y) ~ 0.0, u(1, y) ~ 0,
+u(x, 0) ~ 0.0, u(x, 1) ~ 0]
 # Space and time domains
-domains = [x ∈ Interval(0.0,1.0),
-y ∈ Interval(0.0,1.0)]
+domains = [x ∈ Interval(0.0, 1.0),
+y ∈ Interval(0.0, 1.0)]
 # Discretization
 dx = 0.1
 
 # Neural network
 dim = 2 # number of dimensions
-chain = Lux.Chain(Dense(dim,16,Lux.σ),Dense(16,16,Flux.σ),Dense(16,1))
+chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Flux.σ), Dense(16, 1))
 
 discretization = PhysicsInformedNN(chain, QuadratureTraining())
 
-@named pde_system = PDESystem(eq,bcs,domains,[x,y],[u(x, y)])
-prob = discretize(pde_system,discretization)
+@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
+prob = discretize(pde_system, discretization)
 
-callback = function (p,l)
+callback = function (p, l)
 println("Current loss is: $l")
 return false
 end
 
-res = Optimization.solve(prob, ADAM(0.1); callback = callback, maxiters=4000)
-prob = remake(prob,u0=res.minimizer)
-res = Optimization.solve(prob, ADAM(0.01); callback = callback, maxiters=2000)
+res = Optimization.solve(prob, ADAM(0.1); callback = callback, maxiters = 4000)
+prob = remake(prob, u0 = res.minimizer)
+res = Optimization.solve(prob, ADAM(0.01); callback = callback, maxiters = 2000)
 phi = discretization.phi
 ```
 
 And some analysis:
 
 ```julia
-xs,ys = [infimum(d.domain):dx/10:supremum(d.domain) for d in domains]
-analytic_sol_func(x,y) = (sin(pi*x)*sin(pi*y))/(2pi^2)
+xs, ys = [infimum(d.domain):(dx / 10):supremum(d.domain) for d in domains]
+analytic_sol_func(x, y) = (sin(pi * x) * sin(pi * y)) / (2pi^2)
 
-u_predict = reshape([first(phi([x,y],res.minimizer)) for x in xs for y in ys],(length(xs),length(ys)))
-u_real = reshape([analytic_sol_func(x,y) for x in xs for y in ys], (length(xs),length(ys)))
+u_predict = reshape([first(phi([x, y], res.minimizer)) for x in xs for y in ys],
+(length(xs), length(ys)))
+u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys],
+(length(xs), length(ys)))
 diff_u = abs.(u_predict .- u_real)
 
 using Plots
-p1 = plot(xs, ys, u_real, linetype=:contourf,title = "analytic");
-p2 = plot(xs, ys, u_predict, linetype=:contourf,title = "predict");
-p3 = plot(xs, ys, diff_u,linetype=:contourf,title = "error");
-plot(p1,p2,p3)
+p1 = plot(xs, ys, u_real, linetype = :contourf, title = "analytic");
+p2 = plot(xs, ys, u_predict, linetype = :contourf, title = "predict");
+p3 = plot(xs, ys, diff_u, linetype = :contourf, title = "error");
+plot(p1, p2, p3)
 ```
 
 ![image](https://user-images.githubusercontent.com/12683885/90962648-2db35980-e4ba-11ea-8e58-f4f07c77bcb9.png)

docs/src/developer/debugging.md

Lines changed: 51 additions & 32 deletions
@@ -15,41 +15,40 @@ Dxx = Differential(x)^2
 Dtt = Differential(t)^2
 Dt = Differential(t)
 #2D PDE
-C=1
-eq = Dtt(u(x,t)) ~ C^2*Dxx(u(x,t))
+C = 1
+eq = Dtt(u(x, t)) ~ C^2 * Dxx(u(x, t))
 
 # Initial and boundary conditions
-bcs = [u(0,t) ~ 0.,
-u(1,t) ~ 0.,
-u(x,0) ~ x*(1. - x),
-Dt(u(x,0)) ~ 0. ]
+bcs = [u(0, t) ~ 0.0,
+u(1, t) ~ 0.0,
+u(x, 0) ~ x * (1.0 - x),
+Dt(u(x, 0)) ~ 0.0]
 
 # Space and time domains
-domains = [x ∈ Interval(0.0,1.0),
-t ∈ Interval(0.0,1.0)]
+domains = [x ∈ Interval(0.0, 1.0),
+t ∈ Interval(0.0, 1.0)]
 
 # Neural network
-chain = FastChain(FastDense(2,16,Flux.σ),FastDense(16,16,Flux.σ),FastDense(16,1))
+chain = FastChain(FastDense(2, 16, Flux.σ), FastDense(16, 16, Flux.σ), FastDense(16, 1))
 init_params = DiffEqFlux.initial_params(chain)
 
 eltypeθ = eltype(init_params)
 phi = NeuralPDE.get_phi(chain)
 derivative = NeuralPDE.get_numeric_derivative()
 
-u_ = (cord, θ, phi)->sum(phi(cord, θ))
+u_ = (cord, θ, phi) -> sum(phi(cord, θ))
 
-phi([1,2], init_params)
+phi([1, 2], init_params)
 
 phi_ = (p) -> phi(p, init_params)[1]
-dphi = Zygote.gradient(phi_,[1.,2.])
+dphi = Zygote.gradient(phi_, [1.0, 2.0])
 
-dphi1 = derivative(phi,u_,[1.,2.],[[ 0.0049215667, 0.0]],1,init_params)
-dphi2 = derivative(phi,u_,[1.,2.],[[0.0, 0.0049215667]],1,init_params)
-isapprox(dphi[1][1], dphi1, atol=1e-8)
-isapprox(dphi[1][2], dphi2, atol=1e-8)
+dphi1 = derivative(phi, u_, [1.0, 2.0], [[0.0049215667, 0.0]], 1, init_params)
+dphi2 = derivative(phi, u_, [1.0, 2.0], [[0.0, 0.0049215667]], 1, init_params)
+isapprox(dphi[1][1], dphi1, atol = 1e-8)
+isapprox(dphi[1][2], dphi2, atol = 1e-8)
 
-
-indvars = [x,t]
+indvars = [x, t]
 depvars = [u(x, t)]
 dict_depvars_input = Dict(:u => [:x, :t])
 dim = length(domains)
@@ -58,8 +57,12 @@ multioutput = chain isa AbstractArray
 strategy = NeuralPDE.GridTraining(dx)
 integral = NeuralPDE.get_numeric_integral(strategy, indvars, multioutput, chain, derivative)
 
-_pde_loss_function = NeuralPDE.build_loss_function(eq,indvars,depvars,phi,derivative,integral,multioutput,init_params,strategy)
+_pde_loss_function = NeuralPDE.build_loss_function(eq, indvars, depvars, phi, derivative,
+integral, multioutput, init_params,
+strategy)
+```
 
+```
 julia> expr_pde_loss_function = NeuralPDE.build_symbolic_loss_function(eq,indvars,depvars,dict_depvars_input,phi,derivative,integral,multioutput,init_params,strategy)
 
 :((cord, var"##θ#529", phi, derivative, integral, u)->begin
@@ -76,11 +79,17 @@ julia> bc_indvars = NeuralPDE.get_variables(bcs,indvars,depvars)
 [:t]
 [:x]
 [:x]
+```
 
-_bc_loss_functions = [NeuralPDE.build_loss_function(bc,indvars,depvars,
-phi,derivative,integral,multioutput,init_params,strategy,
-bc_indvars = bc_indvar) for (bc,bc_indvar) in zip(bcs,bc_indvars)]
+```julia
+_bc_loss_functions = [NeuralPDE.build_loss_function(bc, indvars, depvars,
+phi, derivative, integral, multioutput,
+init_params, strategy,
+bc_indvars = bc_indvar)
+for (bc, bc_indvar) in zip(bcs, bc_indvars)]
+```
 
+```
 julia> expr_bc_loss_functions = [NeuralPDE.build_symbolic_loss_function(bc,indvars,depvars,dict_depvars_input,
 phi,derivative,integral,multioutput,init_params,strategy,
 bc_indvars = bc_indvar) for (bc,bc_indvar) in zip(bcs,bc_indvars)]
@@ -113,10 +122,15 @@ julia> expr_bc_loss_functions = [NeuralPDE.build_symbolic_loss_function(bc,indva
 end
 end
 end)
+```
 
-train_sets = NeuralPDE.generate_training_sets(domains,dx,[eq],bcs,eltypeθ,indvars,depvars)
-pde_train_set,bcs_train_set = train_sets
+```julia
+train_sets = NeuralPDE.generate_training_sets(domains, dx, [eq], bcs, eltypeθ, indvars,
+depvars)
+pde_train_set, bcs_train_set = train_sets
+```
 
+```
 julia> pde_train_set
 1-element Array{Array{Float32,2},1}:
 [0.1 0.2 … 0.8 0.9; 0.1 0.1 … 1.0 1.0]
@@ -128,10 +142,14 @@ julia> bcs_train_set
 [1.0 1.0 … 1.0 1.0; 0.0 0.1 … 0.9 1.0]
 [0.0 0.1 … 0.9 1.0; 0.0 0.0 … 0.0 0.0]
 [0.0 0.1 … 0.9 1.0; 0.0 0.0 … 0.0 0.0]
+```
 
+```julia
+pde_bounds, bcs_bounds = NeuralPDE.get_bounds(domains, [eq], bcs, eltypeθ, indvars, depvars,
+NeuralPDE.StochasticTraining(100))
+```
 
-pde_bounds, bcs_bounds = NeuralPDE.get_bounds(domains,[eq],bcs,eltypeθ,indvars,depvars,NeuralPDE.StochasticTraining(100))
-
+```
 julia> pde_bounds
 1-element Vector{Vector{Any}}:
 [Float32[0.01, 0.99], Float32[0.01, 0.99]]
@@ -142,13 +160,14 @@ julia> bcs_bounds
 [1, Float32[0.0, 1.0]]
 [Float32[0.0, 1.0], 0]
 [Float32[0.0, 1.0], 0]
+```
 
-discretization = NeuralPDE.PhysicsInformedNN(chain,strategy)
-
-@named pde_system = PDESystem(eq,bcs,domains,indvars,depvars)
-prob = NeuralPDE.discretize(pde_system,discretization)
+```julia
+discretization = NeuralPDE.PhysicsInformedNN(chain, strategy)
 
-expr_prob = NeuralPDE.symbolic_discretize(pde_system,discretization)
-expr_pde_loss_function , expr_bc_loss_functions = expr_prob
+@named pde_system = PDESystem(eq, bcs, domains, indvars, depvars)
+prob = NeuralPDE.discretize(pde_system, discretization)
 
+expr_prob = NeuralPDE.symbolic_discretize(pde_system, discretization)
+expr_pde_loss_function, expr_bc_loss_functions = expr_prob
 ```

docs/src/examples/3rd.md

Lines changed: 16 additions & 16 deletions
@@ -25,29 +25,29 @@ import ModelingToolkit: Interval, infimum, supremum
 Dxxx = Differential(x)^3
 Dx = Differential(x)
 # ODE
-eq = Dxxx(u(x)) ~ cos(pi*x)
+eq = Dxxx(u(x)) ~ cos(pi * x)
 
 # Initial and boundary conditions
-bcs = [u(0.) ~ 0.0,
-u(1.) ~ cos(pi),
-Dx(u(1.)) ~ 1.0]
+bcs = [u(0.0) ~ 0.0,
+u(1.0) ~ cos(pi),
+Dx(u(1.0)) ~ 1.0]
 
 # Space and time domains
-domains = [x ∈ Interval(0.0,1.0)]
+domains = [x ∈ Interval(0.0, 1.0)]
 
 # Neural network
-chain = Lux.Chain(Dense(1,8,Lux.σ),Dense(8,1))
+chain = Lux.Chain(Dense(1, 8, Lux.σ), Dense(8, 1))
 
 discretization = PhysicsInformedNN(chain, QuasiRandomTraining(20))
-@named pde_system = PDESystem(eq,bcs,domains,[x],[u(x)])
-prob = discretize(pde_system,discretization)
+@named pde_system = PDESystem(eq, bcs, domains, [x], [u(x)])
+prob = discretize(pde_system, discretization)
 
-callback = function (p,l)
+callback = function (p, l)
 println("Current loss is: $l")
 return false
 end
 
-res = Optimization.solve(prob, Adam(0.01); callback = callback, maxiters=2000)
+res = Optimization.solve(prob, Adam(0.01); callback = callback, maxiters = 2000)
 phi = discretization.phi
 ```
 
@@ -56,16 +56,16 @@ We can plot the predicted solution of the ODE and its analytical solution.
 ```@example 3rdDerivative
 using Plots
 
-analytic_sol_func(x) = (π*x*(-x+(π^2)*(2*x-3)+1)-sin(π*x))/(π^3)
+analytic_sol_func(x) = (π * x * (-x + (π^2) * (2 * x - 3) + 1) - sin(π * x)) / (π^3)
 
 dx = 0.05
-xs = [infimum(d.domain):dx/10:supremum(d.domain) for d in domains][1]
-u_real = [analytic_sol_func(x) for x in xs]
-u_predict = [first(phi(x,res.u)) for x in xs]
+xs = [infimum(d.domain):(dx / 10):supremum(d.domain) for d in domains][1]
+u_real = [analytic_sol_func(x) for x in xs]
+u_predict = [first(phi(x, res.u)) for x in xs]
 
 x_plot = collect(xs)
-plot(x_plot ,u_real,title = "real")
-plot!(x_plot ,u_predict,title = "predict")
+plot(x_plot, u_real, title = "real")
+plot!(x_plot, u_predict, title = "predict")
 ```
 
 ![hodeplot](https://user-images.githubusercontent.com/12683885/90276340-69bc3e00-de6c-11ea-89a7-7d291123a38b.png)

docs/src/examples/heterogeneous.md

Lines changed: 12 additions & 10 deletions
@@ -19,29 +19,31 @@ Dx = Differential(x)
 Dy = Differential(y)
 
 # 2D PDE
-eq = p(x) + q(y) + Dx(r(x, y)) + Dy(s(y, x)) ~ 0
+eq = p(x) + q(y) + Dx(r(x, y)) + Dy(s(y, x)) ~ 0
 
 # Initial and boundary conditions
-bcs = [p(1) ~ 0.f0, q(-1) ~ 0.0f0,
-r(x, -1) ~ 0.f0, r(1, y) ~ 0.0f0,
-s(y, 1) ~ 0.0f0, s(-1, x) ~ 0.0f0]
+bcs = [p(1) ~ 0.0f0, q(-1) ~ 0.0f0,
+r(x, -1) ~ 0.0f0, r(1, y) ~ 0.0f0,
+s(y, 1) ~ 0.0f0, s(-1, x) ~ 0.0f0]
 
 # Space and time domains
 domains = [x ∈ Interval(0.0, 1.0),
-y ∈ Interval(0.0, 1.0)]
+y ∈ Interval(0.0, 1.0)]
 
 numhid = 3
-chains = [[Lux.Chain(Dense(1, numhid, Lux.σ), Dense(numhid, numhid, Lux.σ), Dense(numhid, 1)) for i in 1:2];
-[Lux.Chain(Dense(2, numhid, Lux.σ), Dense(numhid, numhid, Lux.σ), Dense(numhid, 1)) for i in 1:2]]
+chains = [[Lux.Chain(Dense(1, numhid, Lux.σ), Dense(numhid, numhid, Lux.σ),
+Dense(numhid, 1)) for i in 1:2]
+[Lux.Chain(Dense(2, numhid, Lux.σ), Dense(numhid, numhid, Lux.σ),
+Dense(numhid, 1)) for i in 1:2]]
 discretization = NeuralPDE.PhysicsInformedNN(chains, QuadratureTraining())
 
-@named pde_system = PDESystem(eq, bcs, domains, [x,y], [p(x), q(y), r(x, y), s(y, x)])
+@named pde_system = PDESystem(eq, bcs, domains, [x, y], [p(x), q(y), r(x, y), s(y, x)])
 prob = SciMLBase.discretize(pde_system, discretization)
 
-callback = function (p,l)
+callback = function (p, l)
 println("Current loss is: $l")
 return false
 end
 
-res = Optimization.solve(prob, BFGS(); callback = callback, maxiters=100)
+res = Optimization.solve(prob, BFGS(); callback = callback, maxiters = 100)
 ```
