Commit e3d4c2f

export tutorials changed in db9df8b

1 parent a94791f
File tree

23 files changed: +737 -727 lines changed

docs/source/tutorials/tutorial21/tutorial.html

Lines changed: 12 additions & 12 deletions
Large diffs are not rendered by default.

docs/source/tutorials/tutorial8/tutorial.html

Lines changed: 11 additions & 11 deletions
Large diffs are not rendered by default.

tutorials/tutorial1/tutorial.py

Lines changed: 27 additions & 27 deletions
(Note: most paired -/+ lines in the diffs below differ only in invisible trailing whitespace, so they render identically here.)
@@ -1,15 +1,15 @@
 #!/usr/bin/env python
 # coding: utf-8

-# # Tutorial: Introductory Tutorial: Physics Informed Neural Networks with PINA
+# # Tutorial: Introductory Tutorial: Physics Informed Neural Networks with PINA
 # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial1/tutorial.ipynb)
-#
+#
 # > ##### ⚠️ ***Before starting:***
 # > We assume you are already familiar with the concepts covered in the [Getting started with PINA](https://mathlab.github.io/PINA/_tutorial.html#getting-started-with-pina) tutorials. If not, we strongly recommend reviewing them before exploring this advanced topic.
-#
+#

 # In this tutorial, we will demonstrate a typical use case of **PINA** for Physics Informed Neural Network (PINN) training. We will cover the basics of training a PINN with PINA; if you want to go further into PINNs, look at our dedicated [tutorials](https://mathlab.github.io/PINA/_tutorial.html#physics-informed-neural-networks) on the topic.
-#
+#
 # Let's start by importing the useful modules:

 # In[ ]:
@@ -43,9 +43,9 @@


 # ## Build the problem
-#
+#
 # We will use a simple Ordinary Differential Equation as a pedagogical example:
-#
+#
 # $$
 # \begin{equation}
 # \begin{cases}
@@ -54,9 +54,9 @@
 # \end{cases}
 # \end{equation}
 # $$
-#
-# with the analytical solution $u(x) = e^x$.
-#
+#
+# with the analytical solution $u(x) = e^x$.
+#
 # The PINA problem is easily written as:

 # In[2]:
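For readers skimming the diff, the `In[2]` cell that defines the problem lies outside the changed lines. A minimal sketch of what it plausibly contains — assuming the ODE is $\frac{du}{dx} = u$ on $(0,1)$ with $u(x=0)=1$ (consistent with the stated solution $u(x)=e^x$) and PINA-0.2-style module names; the domain label `"D"` matches the `discretise_domain` call in the next hunk, while `"x0"` is an assumed name:

```python
import torch
from pina import Condition
from pina.problem import SpatialProblem
from pina.domain import CartesianDomain
from pina.operator import grad
from pina.equation import Equation, FixedValue


def ode_equation(input_, output_):
    # residual of the ODE: du/dx - u = 0
    u_x = grad(output_, input_, components=["u"], d=["x"])
    u = output_.extract(["u"])
    return u_x - u


class SimpleODE(SpatialProblem):
    output_variables = ["u"]
    spatial_domain = CartesianDomain({"x": [0, 1]})
    domains = {
        "x0": CartesianDomain({"x": 0.0}),   # boundary point; name assumed
        "D": CartesianDomain({"x": [0, 1]}), # interior, as sampled in the next hunk
    }
    conditions = {
        "bound_cond": Condition(domain="x0", equation=FixedValue(1.0)),  # u(0) = 1
        "phys_cond": Condition(domain="D", equation=Equation(ode_equation)),
    }


problem = SimpleODE()
```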
@@ -100,8 +100,8 @@ def solution(self, pts):
 problem.discretise_domain(20, "lh", domains=["D"])


-# ## Generate data
-#
+# ## Generate data
+#
 # Data for training can come in the form of direct numerical simulation results, or points in the domains. In case we perform unsupervised learning, we just need the collocation points for training, i.e. points where we want to evaluate the neural network. Sampling points in **PINA** is very easy; here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.

 # In[4]:
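The text mentions three sampling examples, but the hunk shows only the latin-hypercube call; under the mode names documented for `.discretise_domain`, the trio would look roughly like:

```python
# three ways of sampling 20 collocation points on the interior domain "D"
problem.discretise_domain(20, "grid", domains=["D"])    # equispaced grid
problem.discretise_domain(20, "random", domains=["D"])  # uniform random sampling
problem.discretise_domain(20, "lh", domains=["D"])      # latin hypercube, as in the hunk above
```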
@@ -144,18 +144,18 @@ def solution(self, pts):
 # ## Easily solve a Physics Problem with a three-step pipeline

 # Once the problem is defined and the data is generated, we can move on to modeling. This process consists of three key steps:
-#
+#
 # **Choosing a Model**
 # - Select a neural network architecture. You can use the models we provide in the `pina.model` module (see [here](https://mathlab.github.io/PINA/_rst/_code.html#models) for a full list), or define a custom PyTorch module (more on this [here](https://pytorch.org/docs/stable/notes/modules.html)).
-#
+#
 # **Choosing a PINN Solver & Defining the Trainer**
 # * Use a Physics Informed solver from the `pina.solver` module to solve the problem using the specified model. We have already implemented most state-of-the-art solvers for you, [have a look](https://mathlab.github.io/PINA/_rst/_code.html#solvers) if interested. Today we will use the standard `PINN` solver.
-#
+#
 # **Training**
 # * Train the model with the [`Trainer`](https://mathlab.github.io/PINA/_rst/trainer.html) class. The Trainer class provides powerful features to enhance model accuracy, optimize training time and memory, and simplify logging and visualization, thanks to PyTorch Lightning's excellent work; see [our dedicated tutorial](https://mathlab.github.io/PINA/tutorial11/tutorial.html) for further details. By default, training metrics (e.g., MSE error) are logged using a lightning logger (CSVLogger). If you prefer manual tracking, use `pina.callback.MetricTracker`.
-#
+#
 # Let's cover all steps one by one!
-#
+#
 # First we build the model, in this case a FeedForward neural network, with two layers of size 10 and hyperbolic tangent activation:

 # In[7]:
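The `In[7]` cell is not part of the changed lines; a sketch of the model construction just described (two inner layers of size 10, hyperbolic tangent activation), using `pina.model.FeedForward`:

```python
import torch
from pina.model import FeedForward

model = FeedForward(
    input_dimensions=len(problem.input_variables),    # 1, the variable x
    output_dimensions=len(problem.output_variables),  # 1, the field u
    layers=[10, 10],                                  # two inner layers of size 10
    func=torch.nn.Tanh,                               # hyperbolic tangent activation
)
```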
@@ -171,7 +171,7 @@ def solution(self, pts):


 # Then we build the solver. The Physics-Informed Neural Network (`PINN`) solver class needs to be initialised with a `model` and a specific `problem` to be solved. It also takes extra arguments, such as the optimizer, scheduler, loss type, and weighting for the different conditions, which are all set to their default values.
-#
+#
 # >##### 💡***Bonus tip:***
 # > All physics solvers in PINA can handle both forward and inverse problems without requiring any changes to the model or solver structure! See [our tutorial](https://mathlab.github.io/PINA/tutorial7/tutorial.html) on inverse problems for more info.

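With all arguments left at their defaults, the solver construction described above reduces to a one-liner; a sketch:

```python
from pina.solver import PINN

# model and problem as built above; optimizer, scheduler, loss,
# and condition weighting all keep their default values
solver = PINN(problem=problem, model=model)
```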
@@ -184,10 +184,10 @@ def solution(self, pts):


 # Finally, we train the model using the Trainer API. The trainer offers various options to customize your training; refer to the official documentation for details. Here, we highlight the `MetricTracker` from `pina.callback`, which helps track metrics during training. To train, just call the `.train()` method.
-#
+#
 # > ##### ⚠️ ***Important Note:***
 # > In PINA you can log metrics in different ways. The simplest approach is to use the `MetricTracker` class from `pina.callback`, as we will see today. However, especially when we need to train multiple times to get an average of the loss across multiple runs, we suggest using `lightning.pytorch.loggers` (see [here](https://lightning.ai/docs/pytorch/stable/extensions/logging.html) for reference).
-#
+#

 # In[9]:

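A hedged sketch of the `In[9]` training cell the text describes; the epoch count and accelerator are illustrative choices, not values taken from the diff:

```python
from pina import Trainer
from pina.callback import MetricTracker

trainer = Trainer(
    solver=solver,
    max_epochs=1500,              # illustrative, not from the diff
    callbacks=[MetricTracker()],  # manual metric tracking, as noted above
    accelerator="cpu",
)
trainer.train()
```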
@@ -218,7 +218,7 @@ def solution(self, pts):
 trainer.logged_metrics


-# By using `matplotlib`, we can also make some qualitative plots of the solution.
+# By using `matplotlib`, we can also make some qualitative plots of the solution.

 # In[11]:

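One possible version of the qualitative plot mentioned above, assuming the trained `solver` and the `problem.solution` method visible in the hunk headers:

```python
import matplotlib.pyplot as plt
import torch
from pina import LabelTensor

# compare the trained network against the analytical solution e^x
pts = LabelTensor(torch.linspace(0, 1, 256).reshape(-1, 1), labels=["x"])
u_true = problem.solution(pts)
u_pred = solver(pts)
plt.plot(pts.flatten(), u_true.flatten(), label="exact $e^x$")
plt.plot(pts.flatten(), u_pred.detach().flatten(), "--", label="PINN")
plt.xlabel("x")
plt.legend()
plt.show()
```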
@@ -249,17 +249,17 @@ def solution(self, pts):


 # ## What's Next?
-#
+#
 # Congratulations on completing the introductory tutorial on Physics-Informed Training! Now that you have a solid foundation, here are several exciting directions you can explore:
-#
+#
 # 1. **Experiment with Training Duration & Network Architecture**: Try different training durations and tweak the network architecture to optimize performance.
-#
+#
 # 2. **Explore Other Models in `pina.model`**: Check out other models available in `pina.model` or design your own custom PyTorch module to suit your needs.
-#
+#
 # 3. **Run Training on a GPU**: Speed up your training by running on a GPU and compare the performance improvements.
-#
+#
 # 4. **Test Various Solvers**: Explore and evaluate different solvers to assess their performance on various types of problems.
-#
+#
 # 5. **... and many more!**: The possibilities are vast! Continue experimenting with advanced configurations, solvers, and other features in PINA.
-#
+#
 # For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/).

tutorials/tutorial10/tutorial.py

Lines changed: 42 additions & 38 deletions
@@ -2,12 +2,12 @@
 # coding: utf-8

 # # Tutorial: Solving the Kuramoto–Sivashinsky Equation with Averaging Neural Operator
-#
+#
 # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial10/tutorial.ipynb)
-#
-#
+#
+#
 # In this tutorial, we will build a Neural Operator using the **`AveragingNeuralOperator`** model and the **`SupervisedSolver`**. By the end of this tutorial, you will be able to train a Neural Operator to learn the operator for time-dependent PDEs.
-#
+#
 # Let's start by importing the necessary modules.

 # In[ ]:
@@ -24,8 +24,12 @@
 get_ipython().system('pip install "pina-mathlab[tutorial]"')
 # get the data
 get_ipython().system('mkdir "data"')
-get_ipython().system('wget "https://github.com/mathLab/PINA/raw/refs/heads/master/tutorials/tutorial10/data/Data_KS.mat" -O "data/Data_KS.mat"')
-get_ipython().system('wget "https://github.com/mathLab/PINA/raw/refs/heads/master/tutorials/tutorial10/data/Data_KS2.mat" -O "data/Data_KS2.mat"')
+get_ipython().system(
+    'wget "https://github.com/mathLab/PINA/raw/refs/heads/master/tutorials/tutorial10/data/Data_KS.mat" -O "data/Data_KS.mat"'
+)
+get_ipython().system(
+    'wget "https://github.com/mathLab/PINA/raw/refs/heads/master/tutorials/tutorial10/data/Data_KS2.mat" -O "data/Data_KS2.mat"'
+)

 import torch
 import matplotlib.pyplot as plt
@@ -41,36 +45,36 @@


 # ## Data Generation
-#
+#
 # In this tutorial, we will focus on solving the **Kuramoto-Sivashinsky (KS)** equation, a fourth-order nonlinear PDE. The equation is given by:
-#
+#
 # $$
 # \frac{\partial u}{\partial t}(x,t) = -u(x,t)\frac{\partial u}{\partial x}(x,t) - \frac{\partial^{4}u}{\partial x^{4}}(x,t) - \frac{\partial^{2}u}{\partial x^{2}}(x,t).
 # $$
-#
+#
 # In this equation, $x \in \Omega = [0, 64]$ represents a spatial location, and $t \in \mathbb{T} = [0, 50]$ represents time. The function $u(x, t)$ is the value of the function at each point in space and time, with $u(x, t) \in \mathbb{R}$. We denote the solution space as $\mathbb{U}$, where $u \in \mathbb{U}$.
-#
+#
 # We impose Dirichlet boundary conditions on the derivative of $u$ at the boundary of the domain $\partial \Omega$:
-#
+#
 # $$
 # \frac{\partial u}{\partial x}(x,t) = 0 \quad \forall (x,t) \in \partial \Omega \times \mathbb{T}.
 # $$
-#
+#
 # The initial conditions are sampled from a distribution over truncated Fourier series with random coefficients $\{A_k, \ell_k, \phi_k\}_k$, as follows:
-#
+#
 # $$
 # u(x,0) = \sum_{k=1}^N A_k \sin\left(2 \pi \frac{\ell_k x}{L} + \phi_k\right),
 # $$
-#
+#
 # where:
 # - $A_k \in [-0.4, -0.3]$,
 # - $\ell_k = 2$,
 # - $\phi_k = 2\pi \quad \forall k=1,\dots,N$.
-#
-# We have already generated data for different initial conditions. The goal is to build a Neural Operator that, given $u(x,t)$, outputs $u(x,t+\delta)$, where $\delta$ is a fixed time step.
-#
+#
+# We have already generated data for different initial conditions. The goal is to build a Neural Operator that, given $u(x,t)$, outputs $u(x,t+\delta)$, where $\delta$ is a fixed time step.
+#
 # We will cover the Neural Operator architecture later, but for now, let's start by importing the data.
-#
+#
 # **Note:**
 # The numerical integration is obtained using a pseudospectral method for spatial derivative discretization and implicit Runge-Kutta 5 for temporal dynamics.

@@ -102,7 +106,7 @@
 # - `B` is the batch size (i.e., how many initial conditions we sample),
 # - `N` is the number of points in the mesh (which is the product of the discretization in $x$ times the one in $t$),
 # - `D` is the dimension of the problem (in this case, we have three variables: $[u, t, x]$).
-#
+#
 # We are now going to plot some trajectories!

 # In[4]:
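The import cell itself is outside the diff; a heavily hedged sketch of how the `.mat` file could be brought into the `[B, N, D]` layout described above — the field name `"train"` inside the file is hypothetical, introduced only for illustration:

```python
import scipy.io
import torch
from pina import LabelTensor

raw = scipy.io.loadmat("data/Data_KS.mat")
data = torch.tensor(raw["train"], dtype=torch.float32)  # "train" is a hypothetical key

# D = 3 with variables [u, t, x]; a LabelTensor keeps the names attached
data = LabelTensor(data, labels=["u", "t", "x"])
u = data.extract(["u"])            # field values, shape [B, N, 1]
coords = data.extract(["t", "x"])  # space-time coordinates, shape [B, N, 2]
```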
@@ -166,36 +170,36 @@ def plot_trajectory(coords, real, no_sol=None):


 # As we can see, as time progresses, the solution becomes chaotic, making it very difficult to learn! We will now focus on building a Neural Operator using the `SupervisedSolver` class to tackle this problem.
-#
+#
 # ## Averaging Neural Operator
-#
+#
 # We will build a neural operator $\texttt{NO}$, which takes the solution at time $t=0$ for any $x\in\Omega$ and the time $t$ at which we want to compute the solution, and gives back the solution to the KS equation $u(x, t)$. Mathematically:
-#
+#
 # $$
 # \texttt{NO}_\theta : \mathbb{U} \rightarrow \mathbb{U},
 # $$
-#
+#
 # such that
-#
+#
 # $$
 # \texttt{NO}_\theta[u(t=0)](x, t) \rightarrow u(x, t).
 # $$
-#
+#
 # There are many ways to approximate this operator, for example, by using a 2D [FNO](https://mathlab.github.io/PINA/_rst/model/fourier_neural_operator.html) (for regular meshes), a [DeepOnet](https://mathlab.github.io/PINA/_rst/model/deeponet.html), a [Continuous Convolutional Neural Operator](https://mathlab.github.io/PINA/_rst/model/block/convolution.html), or a [MIONet](https://mathlab.github.io/PINA/_rst/model/mionet.html). In this tutorial, we will use the *Averaging Neural Operator* presented in [*The Nonlocal Neural Operator: Universal Approximation*](https://arxiv.org/abs/2304.13221), which is a [Kernel Neural Operator](https://mathlab.github.io/PINA/_rst/model/kernel_neural_operator.html) with an integral kernel:
-#
+#
 # $$
 # K(v) = \sigma\left(Wv(x) + b + \frac{1}{|\Omega|}\int_\Omega v(y)dy\right)
 # $$
-#
+#
 # where:
-#
+#
 # * $v(x) \in \mathbb{R}^{\rm{emb}}$ is the update for a function $v$, with $\mathbb{R}^{\rm{emb}}$ being the embedding (hidden) size.
 # * $\sigma$ is a non-linear activation function.
 # * $W \in \mathbb{R}^{\rm{emb} \times \rm{emb}}$ is a tunable matrix.
 # * $b \in \mathbb{R}^{\rm{emb}}$ is a tunable bias.
-#
+#
 # In PINA, many Kernel Neural Operators are already implemented. The modular components of the [Kernel Neural Operator](https://mathlab.github.io/PINA/_rst/model/kernel_neural_operator.html) class allow you to create new ones by composing base kernel layers.
-#
+#
 # **Note:** We will use the already built class `AveragingNeuralOperator`. As a constructive exercise, try to use the [KernelNeuralOperator](https://mathlab.github.io/PINA/_rst/model/kernel_neural_operator.html) class to build a kernel neural operator from scratch. You might employ the different layers that we have in PINA, such as the [FeedForward](https://mathlab.github.io/PINA/_rst/model/feed_forward.html) and [AveragingNeuralOperator](https://mathlab.github.io/PINA/_rst/model/average_neural_operator.html) layers.

 # In[5]:
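The `In[5]` cell is outside the changed lines; a hedged reconstruction of the model block, with the embedding size chosen for illustration, the custom `SIREN` module matching the `def forward(self, x)` context in the hunk headers, and the projection input sized as embedding + number of coordinates (an assumption about the class's expected shapes):

```python
import torch
from pina.model import AveragingNeuralOperator


class SIREN(torch.nn.Module):
    # periodic activation, see arXiv:2006.09661
    def forward(self, x):
        return torch.sin(x)


embedding = 40  # illustrative hidden size
lifting_net = torch.nn.Linear(3, embedding)         # [u, t, x] -> embedding
projecting_net = torch.nn.Linear(embedding + 2, 1)  # embedding (+ coords) -> u
model = AveragingNeuralOperator(
    lifting_net=lifting_net,
    projecting_net=projecting_net,
    coordinates_indices=["t", "x"],
    field_indices=["u"],
    func=SIREN,
)
```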
@@ -222,9 +226,9 @@ def forward(self, x):


 # Super easy! Notice that we use the `SIREN` activation function, which is discussed in more detail in the paper [Implicit Neural Representations with Periodic Activation Functions](https://arxiv.org/abs/2006.09661).
-#
+#
 # ## Solving the KS problem
-#
+#
 # We will now focus on solving the KS equation using the `SupervisedSolver` class and the `AveragingNeuralOperator` model. As done in the [FNO tutorial](https://github.com/mathLab/PINA/blob/master/tutorials/tutorial5/tutorial.ipynb), we now create the Neural Operator problem class with `SupervisedProblem`.

 # In[6]:
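A hedged sketch of the `In[6]` setup named above; `u_t` and `u_next` are hypothetical names standing in for the snapshot tensors built from the dataset (input at time $t$, target at time $t+\delta$):

```python
from pina import Trainer
from pina.problem.zoo import SupervisedProblem
from pina.solver import SupervisedSolver

# u_t / u_next: hypothetical names for snapshots at t and t + delta
problem = SupervisedProblem(input_=u_t, output_=u_next)
solver = SupervisedSolver(problem=problem, model=model)
trainer = Trainer(solver=solver, max_epochs=40, accelerator="cpu")  # illustrative values
trainer.train()
```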
@@ -267,7 +271,7 @@ def forward(self, x):
 )


-# As we can see, we can obtain nice results considering the short training time and the difficulty of the problem!
+# As we can see, we can obtain nice results considering the short training time and the difficulty of the problem!
 # Let's take a look at the training and testing error:

 # In[8]:
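One way the train/test error in the next cell could be computed, sketched with `PowerLoss` and hypothetical held-out tensors `u_in_test`, `u_out_test`:

```python
import torch
from pina.loss import PowerLoss

metric = PowerLoss(p=2)  # mean-squared-type metric
with torch.no_grad():
    err = metric(solver(u_in_test), u_out_test)  # test tensors are hypothetical
print(f"test error: {err.item():.3e}")
```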
@@ -293,13 +297,13 @@ def forward(self, x):
 # As we can see, the error is pretty small, which aligns with the observations from the previous plots.

 # ## What's Next?
-#
+#
 # You have completed the tutorial on solving time-dependent PDEs using Neural Operators in **PINA**. Great job! Here are some potential next steps you can explore:
-#
+#
 # 1. **Train the network for longer or with different layer sizes**: Experiment with various configurations, such as adjusting the number of layers or hidden dimensions, to further improve accuracy and observe the impact on performance.
-#
+#
 # 2. **Use a more challenging dataset**: Try the more complex dataset [Data_KS2.mat](data/Data_KS2.mat), where $A_k \in [-0.5, 0.5]$, $\ell_k \in [1, 2, 3]$, and $\phi_k \in [0, 2\pi]$, for a more difficult task. This dataset may require longer training and testing.
-#
+#
 # 3. **... and many more...**: Explore other models, such as the [FNO](https://mathlab.github.io/PINA/_rst/models/fno.html) or [DeepOnet](https://mathlab.github.io/PINA/_rst/models/deeponet.html), or implement your own operator using the [KernelNeuralOperator](https://mathlab.github.io/PINA/_rst/models/base_no.html) class to compare performance and find the best model for your task.
-#
+#
 # For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/).
