Add dense and sparse wrappers for ParOpt from ParOpt directly #414


Merged: 28 commits, Jul 31, 2025
Commits (28), showing changes from all commits
8f9fb1f
fixed the convertJacobian call when jacType == "csr" so that it retur…
gjkennedy Dec 7, 2023
57f3a49
added inform values to the ParOpt wrapper
gjkennedy Dec 7, 2023
339d8cc
simplified ParOpt wrapper
gjkennedy Dec 7, 2023
8c5a27f
updated wrapper
gjkennedy Dec 7, 2023
d82a685
Merge branch 'main' into paropt-wrapper
gjkennedy Oct 2, 2024
e754550
Import error
A-CGray Oct 2, 2024
2fbabb8
Merge branch 'main' into paropt-wrapper
A-CGray Jan 2, 2025
0c4e334
Add ParOpt to `test_large_sparse`
A-CGray Feb 20, 2025
7710989
Run tp109 test on more optimizers
A-CGray Feb 20, 2025
be96f60
`isort .`
A-CGray Feb 20, 2025
2a5225e
flake8 fixes
A-CGray Feb 20, 2025
dd2b497
Merge branch 'main' into paropt-wrapper
A-CGray Jul 23, 2025
b6b8d23
Allow instances of `OptTest` to implement a `setup_optimizer` method
A-CGray Jul 23, 2025
959f9d6
Remove Error class that no longer exists
A-CGray Jul 23, 2025
bb632d3
Add comprehensive testing of all ParOpt variants
A-CGray Jul 23, 2025
ea5d54b
Improve tests
A-CGray Jul 23, 2025
9c4e2dd
typo
A-CGray Jul 23, 2025
32fb778
Fix mpi import stuff
A-CGray Jul 23, 2025
63740e7
Reword docs
A-CGray Jul 23, 2025
83588cc
isort
A-CGray Jul 23, 2025
4bd1816
Fix init args
A-CGray Jul 23, 2025
fd585a0
Flake8 issues
A-CGray Jul 23, 2025
38d5050
Move testing utils to `pyoptsparse.testing`
A-CGray Jul 29, 2025
22f0580
Remove ParOpt from optimizer testing
A-CGray Jul 29, 2025
d36d430
`isort .`
A-CGray Jul 29, 2025
d7dcad8
Revert "Fix mpi import stuff"
A-CGray Jul 30, 2025
541e9b8
Small fix
A-CGray Jul 30, 2025
9aa403f
Update docs
A-CGray Jul 30, 2025
21 changes: 16 additions & 5 deletions doc/optimizers/ParOpt.rst
@@ -2,16 +2,27 @@

ParOpt
======
ParOpt is a nonlinear interior point optimizer that is designed for large parallel design optimization problems with structured sparse constraints.
ParOpt is open source and can be downloaded at `https://github.com/smdogroup/paropt <https://github.com/smdogroup/paropt>`_.
Documentation and examples for ParOpt can be found at `https://smdogroup.github.io/paropt/ <https://smdogroup.github.io/paropt/>`_.
The version of ParOpt supported is v2.0.2.
ParOpt is an open source package that implements trust-region, interior-point, and MMA optimization algorithms.
The ParOpt optimizers are themselves MPI parallel, which allows them to scale to large problems.
Unlike other optimizers supported by pyOptSparse, as of ParOpt version 2.1.5 and later, the pyOptSparse interface to ParOpt is a part of ParOpt itself.
Maintaining the wrapper, and controlling which versions of pyOptSparse are compatible with which versions of ParOpt, is therefore the responsibility of the ParOpt developers.
ParOpt can be downloaded at `<https://github.com/smdogroup/paropt>`_.
Documentation and examples can be found at `<https://smdogroup.github.io/paropt/>`_.
The wrapper code in pyOptSparse is minimal; it simply allows ParOpt to be used in the same way as other pyOptSparse optimizers, through the ``OPT`` method.

The ParOpt wrapper takes a ``sparse`` argument, which controls whether ParOpt uses sparse or dense storage for the constraint Jacobian.
The default is ``True``, which uses sparse storage, but is incompatible with ParOpt's trust-region algorithm.
If you want to use the trust-region algorithm, you must set ``sparse=False``, e.g.:

.. code-block:: python

    from pyoptsparse import OPT
    opt = OPT("ParOpt", sparse=False)

Installation
------------
Please follow the instructions `here <https://smdogroup.github.io/paropt/>`_ to install ParOpt as a separate Python package.
Make sure that the package is named ``paropt`` and that the installation location can be found by Python, so that ``from paropt import ParOpt`` works from within the pyOptSparse folder.
This typically means installing it in a location already on Python's search path, or appending the installation directory to the ``PYTHONPATH`` environment variable (for example, in your ``.bashrc``).
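
As a quick sanity check (a minimal sketch, not part of ParOpt's installation instructions), you can confirm the package is importable:

.. code-block:: python

    # Minimal check that ParOpt is installed where Python can find it
    from paropt import ParOpt  # noqa: F401

    print("ParOpt import succeeded")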

Options
-------
1 change: 1 addition & 0 deletions pyoptsparse/__init__.py
@@ -8,6 +8,7 @@
from .pyOpt_optimization import Optimization
from .pyOpt_optimizer import Optimizer, OPT, Optimizers, list_optimizers
from .pyOpt_solution import Solution
from . import testing

# Now import all the individual optimizers
from .pySNOPT.pySNOPT import SNOPT
286 changes: 23 additions & 263 deletions pyoptsparse/pyParOpt/ParOpt.py
@@ -1,263 +1,23 @@
# Standard Python modules
import datetime
import os
import time

# External modules
import numpy as np

# Local modules
from ..pyOpt_optimizer import Optimizer
from ..pyOpt_utils import INFINITY, try_import_compiled_module_from_path

# Attempt to import ParOpt/mpi4py
# If PYOPTSPARSE_REQUIRE_MPI is set to a recognized positive value, attempt import
# and raise exception on failure. If set to anything else, no import is attempted.
if "PYOPTSPARSE_REQUIRE_MPI" in os.environ and os.environ["PYOPTSPARSE_REQUIRE_MPI"].lower() not in [
"always",
"1",
"true",
"yes",
]:
_ParOpt = "ParOpt was not imported, as requested by the environment variable 'PYOPTSPARSE_REQUIRE_MPI'"
MPI = "mpi4py was not imported, as requested by the environment variable 'PYOPTSPARSE_REQUIRE_MPI'"
# If PYOPTSPARSE_REQUIRE_MPI is unset, attempt to import mpi4py.
# Since ParOpt requires mpi4py, if either _ParOpt or mpi4py is unavailable
# we disable the optimizer.
else:
_ParOpt = try_import_compiled_module_from_path("paropt.ParOpt")
MPI = try_import_compiled_module_from_path("mpi4py.MPI")


class ParOpt(Optimizer):
    """
    ParOpt optimizer class

    ParOpt has the capability to handle distributed design vectors.
    This is not replicated here since pyOptSparse does not have the
    capability to handle this type of design problem.
    """

    def __init__(self, raiseError=True, options={}):
        name = "ParOpt"
        category = "Local Optimizer"
        for mod in [_ParOpt, MPI]:
            if isinstance(mod, str) and raiseError:
                raise ImportError(mod)

        # Create and fill-in the dictionary of default option values
        self.defOpts = {}
        paropt_default_options = _ParOpt.getOptionsInfo()
        # Manually override the options with missing default values
        paropt_default_options["ip_checkpoint_file"].default = "default.out"
        paropt_default_options["problem_name"].default = "problem"
        for option_name in paropt_default_options:
            # Get the type and default value of the named argument
            _type = None
            if paropt_default_options[option_name].option_type == "bool":
                _type = bool
            elif paropt_default_options[option_name].option_type == "int":
                _type = int
            elif paropt_default_options[option_name].option_type == "float":
                _type = float
            else:
                _type = str
            default_value = paropt_default_options[option_name].default

            # Set the entry into the dictionary
            self.defOpts[option_name] = [_type, default_value]

        self.set_options = {}
        self.informs = {}
        super().__init__(name, category, defaultOptions=self.defOpts, informs=self.informs, options=options)

        # ParOpt requires a dense Jacobian format
        self.jacType = "dense2d"

        return

    def __call__(
        self, optProb, sens=None, sensStep=None, sensMode=None, storeHistory=None, hotStart=None, storeSens=True
    ):
        """
        This is the main routine used to solve the optimization
        problem.

        Parameters
        ----------
        optProb : Optimization or Solution class instance
            This is the complete description of the optimization problem
            to be solved by the optimizer

        sens : str or python Function.
            Specify the method for computing sensitivities. To
            explicitly use the pyOptSparse gradient class to compute
            the derivatives with finite differences, use 'FD'. 'sens'
            may also be 'CS', which will cause pyOptSparse to compute
            the derivatives using the complex step method. Finally,
            'sens' may be a python function handle which is expected
            to compute the sensitivities directly. For expensive
            function evaluations and/or problems with large numbers of
            design variables this is the preferred method.

        sensStep : float
            Set the step size to use for design variables. Defaults to
            1e-6 when sens is 'FD' and 1e-40j when sens is 'CS'.

        sensMode : str
            Use 'pgc' for parallel gradient computations. Only
            available with mpi4py; each objective evaluation is
            otherwise serial.

        storeHistory : str
            File name of the history file into which the history of
            this optimization will be stored

        hotStart : str
            File name of the history file to "replay" for the
            optimization. The optimization problem used to generate
            the history file specified in 'hotStart' must be
            **IDENTICAL** to the currently supplied 'optProb'. By
            identical we mean, **EVERY SINGLE PARAMETER MUST BE
            IDENTICAL**. As soon as the requested evaluation point
            from ParOpt does not match the history, function and
            gradient evaluations revert back to normal evaluations.

        storeSens : bool
            Flag specifying if sensitivities are to be stored in hist.
            This is necessary for hot-starting only.
        """
        self.startTime = time.time()
        self.callCounter = 0
        self.storeSens = storeSens

        if len(optProb.constraints) == 0:
            # If the problem is unconstrained, add a dummy constraint.
            self.unconstrained = True
            optProb.dummyConstraint = True

        # Save the optimization problem and finalize constraint
        # Jacobian, in general can only do on root proc
        self.optProb = optProb
        self.optProb.finalize()
        # Set history/hotstart
        self._setHistory(storeHistory, hotStart)
        self._setInitialCacheValues()
        self._setSens(sens, sensStep, sensMode)
        blx, bux, xs = self._assembleContinuousVariables()
        xs = np.maximum(xs, blx)
        xs = np.minimum(xs, bux)

        # The number of design variables
        n = len(xs)

        oneSided = True

        if self.unconstrained:
            m = 0
        else:
            indices, blc, buc, fact = self.optProb.getOrdering(["ne", "le", "ni", "li"], oneSided=oneSided)
            m = len(indices)
            self.optProb.jacIndices = indices
            self.optProb.fact = fact
            self.optProb.offset = buc

        if self.optProb.comm.rank == 0:

            class Problem(_ParOpt.Problem):
                def __init__(self, ptr, n, m, xs, blx, bux):
                    super().__init__(MPI.COMM_SELF, nvars=n, ncon=m)
                    self.ptr = ptr
                    self.n = n
                    self.m = m
                    self.xs = xs
                    self.blx = blx
                    self.bux = bux
                    self.fobj = 0.0
                    return

                def getVarsAndBounds(self, x, lb, ub):
                    """Get the variable values and bounds"""
                    # Find the average distance between lower and upper bound
                    bound_sum = 0.0
                    for i in range(len(x)):
                        if self.blx[i] <= -INFINITY or self.bux[i] >= INFINITY:
                            bound_sum += 1.0
                        else:
                            bound_sum += self.bux[i] - self.blx[i]
                    bound_sum = bound_sum / len(x)

                    for i in range(len(x)):
                        x[i] = self.xs[i]
                        lb[i] = self.blx[i]
                        ub[i] = self.bux[i]
                        if self.xs[i] <= self.blx[i]:
                            x[i] = self.blx[i] + 0.5 * np.min((bound_sum, self.bux[i] - self.blx[i]))
                        elif self.xs[i] >= self.bux[i]:
                            x[i] = self.bux[i] - 0.5 * np.min((bound_sum, self.bux[i] - self.blx[i]))

                    return

                def evalObjCon(self, x):
                    """Evaluate the objective and constraint values"""
                    fobj, fcon, fail = self.ptr._masterFunc(x[:], ["fobj", "fcon"])
                    self.fobj = fobj
                    return fail, fobj, -fcon

                def evalObjConGradient(self, x, g, A):
                    """Evaluate the objective and constraint gradients"""
                    gobj, gcon, fail = self.ptr._masterFunc(x[:], ["gobj", "gcon"])
                    g[:] = gobj[:]
                    for i in range(self.m):
                        A[i][:] = -gcon[i][:]
                    return fail

            optTime = MPI.Wtime()

            # Optimize the problem
            problem = Problem(self, n, m, xs, blx, bux)
            optimizer = _ParOpt.Optimizer(problem, self.set_options)
            optimizer.optimize()
            x, z, zw, zl, zu = optimizer.getOptimizedPoint()

            # Set the total opt time
            optTime = MPI.Wtime() - optTime

            # Get the objective function value
            fobj = problem.fobj

            if self.storeHistory:
                self.metadata["endTime"] = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
                self.metadata["optTime"] = optTime
                self.hist.writeData("metadata", self.metadata)
                self.hist.close()

            # Create the optimization solution. Note that the signs on the multipliers
            # are switched since ParOpt uses a formulation with c(x) >= 0, while pyOpt
            # uses g(x) = -c(x) <= 0. Therefore the multipliers are reversed.
            sol_inform = {"value": "", "text": ""}

            # If the number of constraints is zero, ParOpt returns z as None.
            # Thus if there are no constraints, we should pass an empty list
            # to multipliers instead of z.
            if z is not None:
                sol = self._createSolution(optTime, sol_inform, fobj, x[:], multipliers=-z)
            else:
                sol = self._createSolution(optTime, sol_inform, fobj, x[:], multipliers=[])

            # Indicate solution finished
            self.optProb.comm.bcast(-1, root=0)
        else:  # We are not on the root process so go into waiting loop:
            self._waitLoop()
            sol = None

        # Communicate solution and return
        sol = self._communicateSolution(sol)

        return sol

    def _on_setOption(self, name, value):
        """
        Add the value to the set_options dictionary.
        """
        self.set_options[name] = value
# First party modules
from pyoptsparse.pyOpt_optimizer import Optimizer

try:
    # External modules
    from paropt.paropt_pyoptsparse import ParOptSparse as ParOpt
except ImportError:

    class ParOpt(Optimizer):
        def __init__(self, raiseError=True, options={}):
            name = "ParOpt"
            category = "Local Optimizer"
            self.defOpts = {}
            self.informs = {}
            super().__init__(
                name,
                category,
                defaultOptions=self.defOpts,
                informs=self.informs,
                options=options,
            )
            if raiseError:
                raise ImportError("There was an error importing ParOpt")
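
When ParOpt is unavailable, constructing the wrapper therefore raises an ``ImportError``. A minimal sketch of how a caller might handle this, assuming only the fallback behavior shown above:

    # Sketch: guard against a missing ParOpt installation.
    # OPT() instantiates the fallback class above, which raises
    # ImportError because raiseError defaults to True.
    from pyoptsparse import OPT

    try:
        opt = OPT("ParOpt")
    except ImportError:
        print("paropt is not installed; skipping ParOpt")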
1 change: 1 addition & 0 deletions pyoptsparse/testing/__init__.py
@@ -0,0 +1 @@
from .pyOpt_testing import *
@@ -53,7 +53,7 @@ def get_dict_distance(d, d2):
    "PSQP": {"IFILE": ".out"},
    "CONMIN": {"IFILE": ".out"},
    "NLPQLP": {"iFile": ".out"},
    "ParOpt": {"output_file": ".out"},
    "ParOpt": {"output_file": ".out", "tr_output_file": ".tr", "mma_output_file": ".mma"},
    "ALPSO": {"filename": ".out"},
    "NSGA2": {},
}
@@ -236,7 +236,10 @@ def optimize(self, sens=None, setDV=None, optOptions=None, storeHistory=False, h
        optOptions = self.update_OptOptions_output(optOptions)
        # Optimizer
        try:
            opt = OPT(self.optName, options=optOptions)
            if hasattr(self, "setup_optimizer"):
                opt = self.setup_optimizer(optOptions=optOptions)
            else:
                opt = OPT(self.optName, options=optOptions)
            self.optVersion = opt.version
        except ImportError as e:
            if self.optName in DEFAULT_OPTIMIZERS:
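The new ``setup_optimizer`` hook above lets a test class control how its optimizer is constructed instead of relying on the default ``OPT(self.optName, options=optOptions)`` call. A hypothetical sketch (the class name and the ``sparse=False`` argument are illustrative, drawing on the ``sparse`` option documented earlier in this PR):

    # Hypothetical use of the setup_optimizer hook
    from pyoptsparse import OPT
    from pyoptsparse.testing import OptTest


    class TestParOptDense(OptTest):
        optName = "ParOpt"

        def setup_optimizer(self, optOptions=None):
            # Construct ParOpt with dense Jacobian storage so that
            # the trust-region algorithm can be used
            return OPT(self.optName, sparse=False, options=optOptions)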
7 changes: 2 additions & 5 deletions tests/test_hs015.py
@@ -11,9 +11,7 @@

# First party modules
from pyoptsparse import OPT, History, Optimization

# Local modules
from testing_utils import OptTest
from pyoptsparse.testing import OptTest


class TestHS15(OptTest):
@@ -47,7 +45,6 @@ class TestHS15(OptTest):
"SLSQP": 1e-5,
"NLPQLP": 1e-12,
"IPOPT": 1e-4,
"ParOpt": 1e-6,
"CONMIN": 1e-10,
"PSQP": 5e-12,
}
@@ -119,7 +116,7 @@ def test_snopt(self):
        # sol_xvars = [sol.variables["xvars"][i].value for i in range(2)]
        # assert_allclose(sol_xvars, dv["xvars"], atol=tol, rtol=tol)

    @parameterized.expand(["SLSQP", "PSQP", "CONMIN", "NLPQLP", "ParOpt"])
    @parameterized.expand(["SLSQP", "PSQP", "CONMIN", "NLPQLP"])
    def test_optimization(self, optName):
        self.optName = optName
        self.setup_optProb()