paper/paper.md (+6 −5)
@@ -81,7 +81,7 @@ $$
where $q$ is given, $x$ and $s$ are fixed shifts, $\chi(\cdot \mid \Delta \mathbb{B})$ is the indicator of a ball of radius $\Delta > 0$ defined by a certain norm, and $\psi(\cdot; x)$ is a model of $h$ about $x$.
It is common to set $\psi(t + s; x) = h(x + s + t)$.

-These shifted operators allow us to (i) incorporate bound or trust-region constraints via the indicator, which is required for the **TR** and **TRDH** solvers, and (ii) evaluate the above **in place**, without additional allocations, which is currently not possible with ProximalOperators.jl.
+These shifted operators allow us to (i) incorporate bound or trust-region constraints via the indicator, which is required for the **TR** and **TRDH** solvers, and (ii) evaluate the above in place, without additional allocations, which is currently not possible with ProximalOperators.jl.
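For illustration, here is a minimal sketch of constructing and evaluating such a shifted operator in place; the `shifted` constructor and `prox!` call follow the usage pattern documented for ShiftedProximalOperators.jl, and the exact argument order should be treated as an assumption to verify against its documentation.

```julia
using ProximalOperators, ShiftedProximalOperators

h = NormL1(1.0)           # nonsmooth term h
x = randn(5)              # fixed shift: the current iterate
χ = NormLinf(1.0)         # norm defining the trust-region ball ΔB
Δ = 0.5                   # trust-region radius

# ψ(s) models h(x + s) plus the indicator χ(s | ΔB) of the trust region
ψ = shifted(h, x, Δ, χ)

q = randn(5)
s = similar(q)            # preallocated output buffer
prox!(s, ψ, q, 1.0)       # in-place proximal step: no per-call allocations
```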

RegularizedOptimization.jl provides a consistent API to formulate optimization problems and apply different solvers.
It integrates seamlessly with the [JuliaSmoothOptimizers](https://github.com/JuliaSmoothOptimizers)[@jso] ecosystem, an academic organization for nonlinear optimization software development, testing, and benchmarking.
RegularizedProblems.jl also provides a set of instances commonly used in data science and in the nonsmooth optimization literature, where several choices of $f$ can be paired with various nonsmooth terms $h$.
This design makes for a convenient source of reproducible problem instances for testing and benchmarking the solvers in [RegularizedOptimization.jl](https://www.github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl).
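As a sketch of how such an instance can be paired with a regularizer, the following uses the `bpdn_model` constructor and `RegularizedNLPModel` wrapper named in RegularizedProblems.jl; the exact return values are assumptions to check against the package documentation.

```julia
using RegularizedProblems, ProximalOperators

# Basis-pursuit denoising instance: smooth f as a model, plus the true solution
model, nls_model, sol = bpdn_model()

h = NormL1(1.0)                          # nonsmooth term h
reg_nlp = RegularizedNLPModel(model, h)  # regularized problem f + h
```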

-## Support for exact or approximate second derivatives
+## Support for both exact and approximate Hessians

In contrast with [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl), [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) methods such as **R2N** and **TR** support exact Hessians as well as several Hessian approximations of $f$.
Hessian–vector products $v \mapsto Hv$ can be obtained via automatic differentiation through [ADNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl) or implemented manually.
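A short sketch of both modes using the standard NLPModels API follows; the extended Rosenbrock objective is an arbitrary illustration, and `hprod` and `LBFGSModel` are the documented entry points of NLPModels.jl and NLPModelsModifiers.jl.

```julia
using ADNLPModels, NLPModels, NLPModelsModifiers

# Smooth objective f (extended Rosenbrock), differentiated automatically
f(x) = sum(100 * (x[i+1] - x[i]^2)^2 + (1 - x[i])^2 for i in 1:length(x)-1)
nlp = ADNLPModel(f, [-1.2; 1.0])

x  = nlp.meta.x0
v  = ones(nlp.meta.nvar)
Hv = hprod(nlp, x, v)      # exact Hessian–vector product ∇²f(x) * v via AD

# Quasi-Newton alternative: hprod now applies an LBFGS approximation of ∇²f
qn  = LBFGSModel(nlp)
Hqv = hprod(qn, x, v)
```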
@@ -175,13 +175,14 @@ The subproblem solver is **R2**.

\input{examples/Benchmark.tex}

-For the LM solver, gradient evaluations count $\#\nabla f$ equals the number of Jacobian–vector and adjoint-Jacobian–vector products.
+Note that for the LM solver, the gradient evaluation count $\#\nabla f$ equals the number of Jacobian–vector and adjoint-Jacobian–vector products.
All methods successfully reduced the optimality measure below the specified tolerance of $10^{-4}$, and thus converged to an approximate first-order stationary point.
-However, the final objective values differ due to the nonconvexity of the problems.
+Note that the final objective values differ due to the nonconvexity of the problems.

**SVM with $\ell^{1/2}$ penalty:** **R2N** is the fastest, requiring fewer function and gradient evaluations than **TR**.
It requires more proximal evaluations, but these are inexpensive.
+**LM** requires the fewest function evaluations, but many gradient evaluations, and is the slowest.
**NNMF with constrained $\ell_0$ penalty:** **TR** is the fastest, and requires fewer function and gradient evaluations than **R2N**. **LM** is competitive in terms of function calls but incurs many Jacobian–vector products; it nevertheless achieves the lowest objective value.

Additional tests (e.g., other regularizers, constraint types, and scaling dimensions) have also been conducted, and a full benchmarking campaign is currently underway.
@@ -202,6 +203,6 @@ In ongoing research, the package will be extended with algorithms that enable to

The authors would like to thank Alberto Demarchi for his implementation of the Augmented Lagrangian solver.
Mohamed Laghdaf Habiboullah is supported by an FRQNT excellence grant.
-Youssef Diouane and Dominique Orban are partially supported by an NSERC Discovery Grant.
+Youssef Diouane, Maxence Gollier and Dominique Orban are partially supported by an NSERC Discovery Grant.