Since the matrix $A$ does not change during this process, the LU decomposition remains valid
throughout the process, so this iterative refinement can be done at a reasonably low cost.

Given the original matrix equation $A \cdot x = b$ to solve, the pivot-perturbed matrix
$\tilde{A}$ with a pre-calculated LU decomposition, and the convergence threshold $\epsilon$,
the algorithm is as follows:

1. Initialize:
   1. Set the initial estimate: $x_{\text{est}} \gets 0$.
   2. Set the initial residual: $r \gets b$.
   3. Set the initial backward error: $\text{backward_error} \gets \infty$.
   4. Set the number of iterations to 0.
2. Iteratively refine; loop:
   1. Check the stop criteria:
      1. If $\text{backward_error} \leq \epsilon$, then:
         1. Convergence reached: stop the refinement process.
      2. Else, if the number of iterations exceeds the maximum allowed number of iterations, then:
         1. Convergence not reached; iterative refinement not possible: raise a sparse matrix
            error.
      3. Else:
         1. Increment the number of iterations.
         2. Proceed.
   2. Solve $\tilde{A} \cdot \Delta x = r$ for $\Delta x$.
   3. Calculate the backward error with the $x_{\text{est}}$ and $r$ from the previous iteration using the [backward error formula](#improved-backward-error-calculation).
   4. Set the next estimate of $x$: $x_{\text{est}} \gets x_{\text{est}} + \Delta x$.
   5. Set the residual: $r \gets b - A \cdot x_{\text{est}}$.

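
The loop above can be sketched as follows. This is a minimal, hypothetical Python illustration: the exact 2×2 Cramer solve stands in for the pre-factorized LU solve of $\tilde{A}$, and a simple relative residual norm stands in for the actual backward error formula.

```python
import math

def solve_2x2(m, rhs):
    # Exact 2x2 solve via Cramer's rule: a stand-in for the pre-factorized
    # LU solve with the pivot-perturbed matrix A~.
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(rhs[0] * m[1][1] - rhs[1] * m[0][1]) / det,
            (rhs[1] * m[0][0] - rhs[0] * m[1][0]) / det]

def iterative_refinement(a, a_tilde, b, eps=1e-10, max_iterations=20):
    x_est = [0.0] * len(b)         # 1.1: initial estimate x_est = 0
    r = list(b)                    # 1.2: initial residual r = b
    backward_error = math.inf      # 1.3: initial backward error
    iterations = 0                 # 1.4: iteration counter
    while True:                    # 2: refinement loop
        if backward_error <= eps:
            return x_est           # 2.1.1: convergence reached
        if iterations > max_iterations:
            raise RuntimeError("sparse matrix error")  # 2.1.2
        iterations += 1            # 2.1.3
        dx = solve_2x2(a_tilde, r)                     # 2.2: solve A~ dx = r
        # 2.3: simplified backward error from the previous r (a stand-in for
        # the componentwise backward error formula)
        backward_error = max(abs(ri) for ri in r) / max(abs(bi) for bi in b)
        x_est = [xi + di for xi, di in zip(x_est, dx)]  # 2.4: update estimate
        r = [bi - sum(aij * xj for aij, xj in zip(row, x_est))
             for row, bi in zip(a, b)]                  # 2.5: update residual
```

With $\tilde{A} = A$ this reaches the exact solution in the first refinement pass, but, as noted below, the convergence check still needs one more pass because the backward error lags one iteration behind.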
Because the backward error is calculated on the $x$ and $r$ from the previous iteration, the
iterative refinement loop will always be executed twice.

The reason a sparse matrix error is raised rather than an iteration divergence error is that it is
the iterative refinement of the matrix equation solution that fails to converge within the set
number of iterations - not the set of power system equations. This will only happen when the matrix
equation requires iterative refinement in the first place, which happens only when pivot
perturbation is needed, namely in the case of an ill-conditioned matrix equation.

#### Differences with literature

They are summarized below.

contains an early-out criterion for the iterative refinement that checks for diminishing returns in
consecutive iterations. It amounts to (in reverse order):

1. If $\text{backward_error} \gt \frac{1}{2}\text{last_backward_error}$, then:
   1. Stop iterative refinement.
2. Else:
   1. Go to the next refinement iteration.

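
A minimal sketch of that criterion (the function name is hypothetical; `last_backward_error` holds the value from the previous iteration):

```python
def should_stop_early(backward_error, last_backward_error):
    # Early-out on diminishing returns: stop as soon as an iteration fails
    # to at least halve the backward error.
    return backward_error > 0.5 * last_backward_error
```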
In power systems, however, the fact that the matrix may contain elements
[spanning several orders of magnitude](#element-size-properties-of-power-system-equations) may cause
slow convergence far away from the optimum. The diminishing returns criterion would cause the
algorithm to exit before the actual solution is found. Multiple refinement iterations may still
yield better results. The power grid model therefore does not stop on diminishing returns. Instead,
a maximum number of iterations is used in combination with the error tolerance.

##### Improved backward error calculation

In power system equations, the matrix equation $A x = b$ can be very unbalanced: some entries
in the matrix $A$ may be very large while others are zero or very small. The same may be true for
the right-hand side of the equation $b$, as well as its solution $x$. In fact, there may be
certain rows $i$ for which both $\left|b[i]\right|$ and
$\sum_j \left|A[i,j]\right| \left|x[j]\right|$ are small and, therefore, their sum is prone to
rounding errors, which may be several orders of magnitude larger than machine precision.