These methods rely solely on the gradient and Hessian(-vector) information of the smooth part $f$ and the proximal mapping of the nonsmooth part $h$ in order to compute steps.
Then, the objective function $f + h$ is used only to accept or reject trial points.
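The sketch below illustrates this acceptance test in generic form; `fh`, `model`, and the threshold `η` are illustrative names, not part of the package's API.

```julia
# Minimal sketch of a trial-point acceptance test (illustrative, not the package's code).
# The objective f + h is evaluated only to compare actual versus predicted decrease.
function accept_step(fh, model, x, s; η = 1e-4)
    ared = fh(x) - fh(x + s)          # actual reduction of f + h
    pred = model(zero(s)) - model(s)  # reduction predicted by the local model around x
    return ared ≥ η * pred            # accept the trial point x + s if the decrease suffices
end

# example: exact quadratic model of f(x) = ½‖x‖² with h = 0, so ared == pred
accept_step(x -> 0.5 * sum(abs2, x), s -> 0.5 * sum(abs2, [1.0, 1.0] .+ s), [1.0, 1.0], [-0.5, -0.5])
```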
Moreover, they can handle cases where Hessian approximations are unbounded [@diouane-habiboullah-orban-2024; @leconte-orban-2023-2], making the package particularly suited for large-scale, ill-conditioned, and nonsmooth problems.
# Statement of need
## Model-based framework for nonsmooth methods
Problem \eqref{eq:nlp} can already be solved in Julia with [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl), which implements in-place, first-order, line-search-based methods.
Most of these methods are splitting schemes that alternate between steps along the gradient of the smooth part $f$ (or along quasi-Newton directions) and proximal steps on the nonsmooth part $h$.
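The sketch below shows one such forward-backward iteration for $h(x) = \lambda \|x\|_1$; it is a generic illustration of the splitting idea, not the API of ProximalAlgorithms.jl.

```julia
# Generic forward-backward (proximal gradient) iteration for f(x) + λ‖x‖₁.
soft_threshold(z, t) = sign.(z) .* max.(abs.(z) .- t, 0)   # prox of t‖·‖₁

function forward_backward_step(x, ∇f, λ, γ)
    y = x .- γ .* ∇f(x)              # forward step on the smooth part f
    return soft_threshold(y, γ * λ)  # backward (proximal) step on h = λ‖·‖₁
end

# one step on f(x) = ½‖x‖² (so ∇f(x) = x) starting from x = [1.0, -2.0]
x_new = forward_backward_step([1.0, -2.0], x -> x, 0.1, 0.5)
```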
Currently, [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl) provides only L-BFGS as a quasi-Newton option.
By contrast, [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) focuses on model-based approaches such as trust-region and regularization algorithms.
As shown in [@aravkin-baraldi-orban-2022], model-based methods typically require fewer evaluations of the objective and its gradient than first-order line search methods, at the expense of solving more involved subproblems.
Although these subproblems may require many proximal iterations, each proximal computation is inexpensive, making the overall approach efficient for large-scale problems.
Building on this perspective, [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) implements state-of-the-art regularization-based algorithms for solving problems of the form $f(x) + h(x)$, where $f$ is smooth and $h$ is nonsmooth.
The package provides a consistent API to formulate optimization problems and apply different regularization methods.
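A typical call might look like the sketch below, in which the smooth part is built with ADNLPModels.jl and the regularizer with ProximalOperators.jl; the `R2` entry point and `ROSolverOptions` follow the package's documented examples, but exact names and signatures may differ across versions.

```julia
# Hypothetical usage sketch; exact names and signatures may differ across versions.
using ADNLPModels, ProximalOperators, RegularizedOptimization

f = ADNLPModel(x -> 0.5 * sum(abs2, x .- 1), zeros(10))  # smooth part as an NLPModel
h = NormL1(0.1)                                           # nonsmooth regularizer 0.1‖x‖₁
stats = R2(f, h, ROSolverOptions())                       # quadratic-regularization solver
println(stats.solution)
```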
It integrates seamlessly with the [JuliaSmoothOptimizers](https://github.com/JuliaSmoothOptimizers) ecosystem, an academic organization for nonlinear optimization software development, testing, and benchmarking.
## Support for Hessians
In contrast to first-order packages such as [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl), [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) enables the use of second-order information, which can significantly improve convergence rates, especially for ill-conditioned problems.
One way to obtain Hessians is via automatic differentiation tools such as [ADNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl).
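For instance, the sketch below builds an `ADNLPModel` and queries gradients and Hessian-vector products through the NLPModels.jl API; the Rosenbrock objective is only an example.

```julia
# Second-order information via automatic differentiation (illustrative model).
using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2, 1.0])
x = nlp.meta.x0
g  = grad(nlp, x)             # gradient ∇f(x)
Hv = hprod(nlp, x, ones(2))   # Hessian-vector product ∇²f(x) v without forming ∇²f(x)
```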
## Requirements of the RegularizedProblems.jl package
To model the problem \eqref{eq:nlp}, one defines the smooth part $f$ and the nonsmooth part $h$ as discussed above.
The package [RegularizedProblems.jl](https://github.com/JuliaSmoothOptimizers/RegularizedProblems.jl) provides a straightforward way to create such instances, called *Regularized Nonlinear Programming Models*:
```julia
reg_nlp = RegularizedNLPModel(f, h)  # wrap the smooth part f and the nonsmooth part h
```
This design makes it a convenient source of reproducible problem instances for testing and benchmarking algorithms in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl).
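The sketch below assembles one such instance; the particular smooth $f$ (an `ADNLPModel`) and nonsmooth $h$ (an $\ell_0$ penalty from ProximalOperators.jl) are arbitrary illustrative choices.

```julia
# Illustrative regularized model: arbitrary smooth f and nonsmooth h.
using ADNLPModels, ProximalOperators, RegularizedProblems

f = ADNLPModel(x -> sum((x .- 1).^2 ./ (1 .+ x.^2)), zeros(4))  # smooth, nonconvex f
h = NormL0(0.5)                                                  # nonsmooth h(x) = 0.5‖x‖₀
reg_nlp = RegularizedNLPModel(f, h)
```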
## Requirements of the ShiftedProximalOperators.jl package
The nonsmooth part $h$ must have a computable proximal mapping, defined for a step length $\nu > 0$ as
$$\operatorname{prox}_{\nu h}(q) := \arg\min_{w} \ \tfrac{1}{2}\|w - q\|^2 + \nu h(w).$$
Specifically, this package considers shifted proximal operators, in which $q$ is given, $x$ and $s$ are fixed shifts, $h$ is the nonsmooth term with respect to which we are computing the proximal operator, and $\chi(\cdot\,; \Delta B)$ is the indicator of a ball of radius $\Delta$ defined by a certain norm.
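As a concrete illustration of a computable proximal mapping, the sketch below evaluates the proximal operator of the $\ell_1$ norm with ProximalOperators.jl; ShiftedProximalOperators.jl provides the shifted variants described above, whose API is not shown here.

```julia
# Evaluating prox_{νh}(q) for h = ‖·‖₁ (soft-thresholding) with ProximalOperators.jl.
using ProximalOperators

h = NormL1(1.0)           # h(w) = ‖w‖₁
q = [1.5, -0.3, 0.7]
ν = 0.5
w, hw = prox(h, q, ν)     # w = prox_{νh}(q); hw = h(w)
```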
## Hyperparameter tuning
The solvers in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) do not require extensive hyperparameter tuning.
## Non-monotone strategies
The solvers in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) implement non-monotone strategies to accept trial points, which can enhance convergence properties.
## Application studies
## Support for inexact subproblem solves
Solvers in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) allow inexact resolution of trust-region and quadratic-regularization subproblems using first-order methods implemented in the package itself, such as the quadratic regularization methods R2 [@aravkin-baraldi-orban-2022] and R2DH [@diouane-habiboullah-orban-2024], and the trust-region variant TRDH [@leconte-orban-2023-2].
This is crucial for large-scale problems where exact subproblem solutions are prohibitive.
## Support for Hessians as Linear Operators
The second-order methods in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) can use Hessian approximations represented as linear operators via [LinearOperators.jl](https://github.com/JuliaSmoothOptimizers/LinearOperators.jl).
Explicitly forming Hessians as dense or sparse matrices is often prohibitively expensive, both computationally and in terms of memory, especially in high-dimensional settings.
In contrast, many problems admit efficient implementations of Hessian–vector or Jacobian–vector products, either through automatic differentiation tools or limited-memory quasi-Newton updates, making the linear-operator approach more scalable and practical.
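The sketch below builds a limited-memory BFGS approximation as a linear operator with LinearOperators.jl; the update pair `(s, y)` is fabricated for illustration.

```julia
# Hessian approximation as a linear operator (never formed as a matrix).
using LinearOperators

n = 100
B = LBFGSOperator(n)      # limited-memory BFGS operator with default memory
s = randn(n)
y = 2.0 .* s              # stand-in gradient difference; must satisfy sᵀy > 0
push!(B, s, y)            # incorporate the quasi-Newton pair (s, y)
v = randn(n)
Bv = B * v                # Hessian-vector product, cost proportional to n × memory
```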
## In-place methods
All solvers in [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl) are implemented in an in-place fashion, minimizing memory allocations during the resolution process.
# Examples