# [Open in Colab](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial1/tutorial.ipynb)
#
# > ##### ⚠️ ***Before starting:***
# > We assume you are already familiar with the concepts covered in the [Getting started with PINA](https://mathlab.github.io/PINA/_tutorial.html#getting-started-with-pina) tutorials. If not, we strongly recommend reviewing them before exploring this advanced topic.
#
# In this tutorial, we will demonstrate a typical use case of **PINA** for Physics-Informed Neural Network (PINN) training. We will cover the basics of training a PINN with PINA; if you want to go deeper into PINNs, have a look at our dedicated [tutorials](https://mathlab.github.io/PINA/_tutorial.html#physics-informed-neural-networks) on the topic.
#
# Let's start by importing the necessary modules:
# In[ ]:
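
# A minimal set of imports for this tutorial (a sketch assuming the module
# layout referenced later in this tutorial; the exact import cell may differ):

import torch
import matplotlib.pyplot as plt

from pina import Trainer, LabelTensor
from pina.model import FeedForward
from pina.solver import PINN
from pina.callback import MetricTracker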
# ## Build the problem
#
# We will use a simple Ordinary Differential Equation as a pedagogical example:
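#
# $$
# \begin{cases}
# \dfrac{du}{dx} = u, & x \in (0, 1), \\
# u(x=0) = 1,
# \end{cases}
# $$
#
# This is the classic exponential-growth problem (assumed here as the concrete instance), whose analytical solution is $u(x) = e^x$.
#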
# Data for training can come in the form of direct numerical simulation results, or points in the domains. If we perform unsupervised learning, we just need the collocation points for training, i.e. points where we want to evaluate the neural network. Sampling points in **PINA** is very easy; here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.
# In[4]:
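
# Sampling collocation points, sketched in three variants (mode names follow
# the PINA documentation; `problem` is the ODE problem defined above):

# random sampling over all domains
problem.discretise_domain(n=20, mode="random", domains="all")

# equispaced grid sampling
problem.discretise_domain(n=20, mode="grid", domains="all")

# latin hypercube sampling
problem.discretise_domain(n=20, mode="latin", domains="all")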
# ## Easily solve a Physics Problem with a three-step pipeline
# Once the problem is defined and the data is generated, we can move on to modeling. This process consists of three key steps:
#
# **Choosing a Model**
# - Select a neural network architecture. You can use the model we provide in the `pina.model` module (see [here](https://mathlab.github.io/PINA/_rst/_code.html#models) for a full list), or define a custom PyTorch module (more on this [here](https://pytorch.org/docs/stable/notes/modules.html)).
#
# **Choosing a PINN Solver & Defining the Trainer**
# * Use a Physics-Informed solver from the `pina.solver` module to solve the problem using the specified model. We have already implemented most state-of-the-art solvers for you, [have a look](https://mathlab.github.io/PINA/_rst/_code.html#solvers) if interested. Today we will use the standard `PINN` solver.
#
# **Training**
# * Train the model with the [`Trainer`](https://mathlab.github.io/PINA/_rst/trainer.html) class. The Trainer class provides powerful features to enhance model accuracy, optimize training time and memory, and simplify logging and visualization, thanks to PyTorch Lightning's excellent work; see [our dedicated tutorial](https://mathlab.github.io/PINA/tutorial11/tutorial.html) for further details. By default, training metrics (e.g., MSE error) are logged using a lightning logger (`CSVLogger`). If you prefer manual tracking, use `pina.callback.MetricTracker`.
#
# Let's cover all steps one by one!
158
-
#
158
+
#
159
159
# First we build the model: in this case a `FeedForward` neural network with two layers of size 10 and hyperbolic tangent activation.
# In[7]:
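
# A sketch of the model definition; the `FeedForward` signature follows the
# PINA documentation, and input/output sizes assume the scalar ODE above:
model = FeedForward(
    input_dimensions=1,   # the independent variable x
    output_dimensions=1,  # the scalar solution u(x)
    layers=[10, 10],      # two hidden layers of size 10
    func=torch.nn.Tanh,   # hyperbolic tangent activation
)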
# Then we build the solver. The Physics-Informed Neural Network (`PINN`) solver class needs to be initialised with a `model` and a specific `problem` to be solved. It also takes extra arguments, such as the optimizer, scheduler, loss type, and weighting for the different conditions, which are all set to their default values.
#
# > ##### 💡***Bonus tip:***
# > All physics solvers in PINA can handle both forward and inverse problems without requiring any changes to the model or solver structure! See [our tutorial](https://mathlab.github.io/PINA/tutorial7/tutorial.html) on inverse problems for more info.
#
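
# A sketch of the solver definition, leaving optimizer, scheduler, loss, and
# condition weighting at their default values:
solver = PINN(problem=problem, model=model)
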
# Finally, we train the model using the Trainer API. The trainer offers various options to customize your training; refer to the official documentation for details. Here, we highlight the `MetricTracker` callback from `pina.callback`, which helps track metrics during training. To train, just call the `.train()` method.
#
# > ##### ⚠️ ***Important Note:***
# > In PINA you can log metrics in different ways. The simplest approach is to use the `MetricTracker` class from `pina.callback`, as we will see today. However, especially when you need to train multiple times to get an average of the loss across multiple runs, we suggest using `lightning.pytorch.loggers` (see [here](https://lightning.ai/docs/pytorch/stable/extensions/logging.html) for reference).
#
# In[9]:
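
# A sketch of the training setup; the epoch count and accelerator choice are
# illustrative rather than the notebook's exact settings:
trainer = Trainer(
    solver=solver,
    max_epochs=1500,
    callbacks=[MetricTracker()],
    accelerator="cpu",
)
trainer.train()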
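
# Inspect the metrics logged during training: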
trainer.logged_metrics

# Using `matplotlib`, we can also produce some qualitative plots of the solution.
# In[11]:
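
# A sketch of a qualitative plot, comparing the network prediction with the
# analytical solution of the assumed exponential ODE:
from pina import LabelTensor

x = torch.linspace(0, 1, 100).reshape(-1, 1)
pts = LabelTensor(x, labels=["x"])
with torch.no_grad():
    u_pred = solver(pts)

plt.plot(x.flatten(), u_pred.flatten(), label="PINN solution")
plt.plot(x.flatten(), torch.exp(x).flatten(), "--", label="exact solution $e^x$")
plt.xlabel("x")
plt.legend()
plt.show()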
# ## What's Next?
#
# Congratulations on completing the introductory tutorial on Physics-Informed Training! Now that you have a solid foundation, here are several exciting directions you can explore:
#
# 1. **Experiment with Training Duration & Network Architecture**: Try different training durations and tweak the network architecture to optimize performance.
#
# 2. **Explore Other Models in `pina.model`**: Check out other models available in `pina.model` or design your own custom PyTorch module to suit your needs.
#
# 3. **Run Training on a GPU**: Speed up your training by running on a GPU and compare the performance improvements.
#
# 4. **Test Various Solvers**: Explore and evaluate different solvers to assess their performance on various types of problems.
#
# 5. **... and many more!**: The possibilities are vast! Continue experimenting with advanced configurations, solvers, and other features in PINA.
#
# For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/).

tutorials/tutorial10/tutorial.py

# coding: utf-8
# # Tutorial: Solving the Kuramoto–Sivashinsky Equation with Averaging Neural Operator
#
# [Open in Colab](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial10/tutorial.ipynb)
#
# In this tutorial, we will build a Neural Operator using the **`AveragingNeuralOperator`** model and the **`SupervisedSolver`**. By the end of this tutorial, you will be able to train a Neural Operator to learn the operator for time-dependent PDEs.
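#
# We focus on the one-dimensional Kuramoto–Sivashinsky (KS) equation, which in its standard form reads:
#
# $$
# \frac{\partial u}{\partial t} + \frac{\partial^2 u}{\partial x^2} + \frac{\partial^4 u}{\partial x^4} + u \frac{\partial u}{\partial x} = 0.
# $$
#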
# In this equation, $x \in \Omega = [0, 64]$ represents a spatial location, and $t \in \mathbb{T} = [0, 50]$ represents time. The function $u(x, t)$ is the value of the function at each point in space and time, with $u(x, t) \in \mathbb{R}$. We denote the solution space as $\mathbb{U}$, where $u \in \mathbb{U}$.
#
# We impose Dirichlet boundary conditions on the derivative of $u$ at the boundary of the domain $\partial \Omega$:
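#
# $$
# \frac{\partial u}{\partial x}(x, t) = 0, \quad x \in \partial\Omega,\; t \in \mathbb{T},
# $$
#
# where the homogeneous value is an assumption made here for concreteness.
#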
# The initial conditions are sampled from a distribution over truncated Fourier series with random coefficients $\{A_k, \ell_k, \phi_k\}_k$, as follows:
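#
# $$
# u(x, 0) = \sum_{k} A_k \sin\!\left(\frac{2\pi \ell_k x}{L} + \phi_k\right),
# $$
#
# with $L$ the length of the spatial domain; the exact parametrization is assumed here, while the coefficient ranges are those reported with the dataset.
#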
# We have already generated data for different initial conditions. The goal is to build a Neural Operator that, given $u(x,t)$, outputs $u(x,t+\delta)$, where $\delta$ is a fixed time step.
#
# We will cover the Neural Operator architecture later, but for now, let’s start by importing the data.
#
# **Note:**
# The numerical integration is obtained using a pseudospectral method for spatial derivative discretization and implicit Runge-Kutta 5 for temporal dynamics.
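
# A sketch of the data import; the file path mirrors the dataset referenced at
# the end of this tutorial, and the MATLAB key name is an assumption:
import torch
from scipy import io

data = io.loadmat("dat/Data_KS.mat")
u = torch.tensor(data["u"], dtype=torch.float32)


# The data is stored in a tensor of shape `[B, N, D]`, where: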
# - `B` is the batch size (i.e., how many initial conditions we sample),
# - `N` is the number of points in the mesh (which is the product of the discretization in $x$ times the one in $t$),
# - `D` is the dimension of the problem (in this case, we have three variables: $[u, t, x]$).
#
# As we can see, as time progresses, the solution becomes chaotic, making it very difficult to learn! We will now focus on building a Neural Operator using the `SupervisedSolver` class to tackle this problem.
#
# ## Averaging Neural Operator
#
# We will build a neural operator $\texttt{NO}$, which takes the solution at time $t=0$ for any $x\in\Omega$, the time $t$ at which we want to compute the solution, and gives back the solution to the KS equation $u(x, t)$. Mathematically:
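#
# $$
# \texttt{NO}_\theta : [u(x, t=0), t] \mapsto u(x, t),
# $$
#
# where $\theta$ are the trainable parameters of the operator.
#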
# There are many ways to approximate this operator, for example, by using a 2D [FNO](https://mathlab.github.io/PINA/_rst/model/fourier_neural_operator.html) (for regular meshes), a [DeepOnet](https://mathlab.github.io/PINA/_rst/model/deeponet.html), a [Continuous Convolutional Neural Operator](https://mathlab.github.io/PINA/_rst/model/block/convolution.html), or a [MIONet](https://mathlab.github.io/PINA/_rst/model/mionet.html). In this tutorial, we will use the *Averaging Neural Operator* presented in [*The Nonlocal Neural Operator: Universal Approximation*](https://arxiv.org/abs/2304.13221), which is a [Kernel Neural Operator](https://mathlab.github.io/PINA/_rst/model/kernel_neural_operator.html) with an integral kernel:
#
# $$
# K(v) = \sigma\left(Wv(x) + b + \frac{1}{|\Omega|}\int_\Omega v(y)dy\right)
# $$
#
# where:
#
# * $v(x) \in \mathbb{R}^{\rm{emb}}$ is the update for a function $v$, with $\mathbb{R}^{\rm{emb}}$ being the embedding (hidden) size.
# * $\sigma$ is a non-linear activation function.
# * $W \in \mathbb{R}^{\rm{emb} \times \rm{emb}}$ is a tunable matrix.
# * $b \in \mathbb{R}^{\rm{emb}}$ is a tunable bias.
#
# In PINA, many Kernel Neural Operators are already implemented. The modular components of the [Kernel Neural Operator](https://mathlab.github.io/PINA/_rst/model/kernel_neural_operator.html) class allow you to create new ones by composing base kernel layers.
#
# **Note:** We will use the ready-made `AveragingNeuralOperator` class. As a constructive exercise, try to use the [KernelNeuralOperator](https://mathlab.github.io/PINA/_rst/model/kernel_neural_operator.html) class to build a kernel neural operator from scratch. You might employ the different layers available in PINA, such as the [FeedForward](https://mathlab.github.io/PINA/_rst/model/feed_forward.html) and [AveragingNeuralOperator](https://mathlab.github.io/PINA/_rst/model/average_neural_operator.html) layers.
# In[5]:
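
# A sketch of the operator definition; the sine (`SIREN`) activation matches
# the one discussed below, while the embedding size is an illustrative choice
# (depending on the PINA version, the projecting net may need the coordinates
# concatenated to the embedding):
import torch
from pina.model import AveragingNeuralOperator


class SIREN(torch.nn.Module):
    def forward(self, x):
        # periodic (sine) activation
        return torch.sin(x)


embedding = 40
model = AveragingNeuralOperator(
    lifting_net=torch.nn.Linear(3, embedding),     # [u, t, x] -> embedding
    projecting_net=torch.nn.Linear(embedding, 1),  # embedding -> u(x, t + delta)
    coordinates_indices=["t", "x"],
    field_indices=["u"],
    n_layers=4,
    func=SIREN,
)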
# Super easy! Notice that we use the `SIREN` activation function, which is discussed in more detail in the paper [Implicit Neural Representations with Periodic Activation Functions](https://arxiv.org/abs/2006.09661).
#
# ## Solving the KS problem
#
# We will now focus on solving the KS equation using the `SupervisedSolver` class and the `AveragingNeuralOperator` model. As done in the [FNO tutorial](https://github.com/mathLab/PINA/blob/master/tutorials/tutorial5/tutorial.ipynb), we now create the Neural Operator problem class with `SupervisedProblem`.
# In[6]:
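
# A sketch of the problem/solver setup: input/output pairs are solution
# snapshots shifted by one time step (tensor names are assumptions):
from pina.problem.zoo import SupervisedProblem
from pina.solver import SupervisedSolver

problem = SupervisedProblem(input_=u_input, output_=u_target)
solver = SupervisedSolver(problem=problem, model=model)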

# As we can see, we can obtain nice results considering the short training time and the difficulty of the problem!
# Let's take a look at the training and testing error:
# In[8]:
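
# A sketch of the error computation: relative L2 errors on the train and test
# snapshots (variable names are assumptions):
with torch.no_grad():
    err_train = torch.linalg.norm(solver(u_input_train) - u_target_train) / torch.linalg.norm(u_target_train)
    err_test = torch.linalg.norm(solver(u_input_test) - u_target_test) / torch.linalg.norm(u_target_test)

print(f"Relative train error: {err_train.item():.2e}")
print(f"Relative test error:  {err_test.item():.2e}")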
# As we can see, the error is pretty small, which aligns with the observations from the previous plots.
# ## What's Next?
#
# You have completed the tutorial on solving time-dependent PDEs using Neural Operators in **PINA**. Great job! Here are some potential next steps you can explore:
#
# 1. **Train the network for longer or with different layer sizes**: Experiment with various configurations, such as adjusting the number of layers or hidden dimensions, to further improve accuracy and observe the impact on performance.
#
# 2. **Use a more challenging dataset**: Try using the more complex dataset [Data_KS2.mat](dat/Data_KS2.mat) where $A_k \in [-0.5, 0.5]$, $\ell_k \in [1, 2, 3]$, and $\phi_k \in [0, 2\pi]$ for a more difficult task. This dataset may require longer training and testing.
#
# 3. **... and many more...**: Explore other models, such as the [FNO](https://mathlab.github.io/PINA/_rst/models/fno.html), [DeepOnet](https://mathlab.github.io/PINA/_rst/models/deeponet.html), or implement your own operator using the [KernelNeuralOperator](https://mathlab.github.io/PINA/_rst/models/base_no.html) class to compare performance and find the best model for your task.
#
# For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/).