
Rescaled tolerances #93

Open

maxpaik16 wants to merge 26 commits into polyfem:main from maxpaik16:rescaled-tolerances

Conversation

@maxpaik16
Contributor

Added functions to the nonlinear Problem class to allow user-specified norms with appropriate rescaling functions.

Updated the nonlinear solver and line-search logic to use these norms and custom rescalings.
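The description above can be sketched as a small Problem-side API (method names follow the hunks quoted later in this review; `TVector` standing in for the solver's Eigen vector type, and the `euclidean_norm` helper, are assumptions made to keep the sketch self-contained):

```cpp
#include <cmath>
#include <string>
#include <vector>

// Hypothetical stand-in for the solver's TVector (an Eigen type in polysolve).
using TVector = std::vector<double>;

double euclidean_norm(const TVector &v)
{
    double s = 0;
    for (double x : v)
        s += x * x;
    return std::sqrt(s);
}

// Sketch of the hooks this PR describes: the problem may supply a custom
// gradient norm plus rescaling factors applied to the stopping tolerances;
// the defaults preserve the old Euclidean behavior.
class Problem
{
public:
    virtual ~Problem() = default;

    virtual double grad_norm(const TVector &grad, const std::string &norm_type) const
    {
        return euclidean_norm(grad); // default: plain Euclidean norm
    }

    virtual double grad_norm_rescaling(const std::string &norm_type) const { return 1; }
    virtual double step_norm_rescaling(const std::string &norm_type) const { return 1; }
    virtual double energy_norm_rescaling(const std::string &norm_type) const { return 1; }
};
```

A derived problem (e.g. in PolyFEM) would override these to return a mesh-dependent norm and tolerance scalings, while existing problems compile unchanged.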

@codecov

codecov bot commented Feb 27, 2026

Codecov Report

❌ Patch coverage is 80.35714% with 22 lines in your changes missing coverage. Please review.
✅ Project coverage is 81.55%. Comparing base (5d62b80) to head (b1e9a40).

Files with missing lines             | Patch %  | Lines
src/polysolve/nonlinear/Solver.cpp   | 80.00%   | 11 Missing ⚠️
src/polysolve/nonlinear/Criteria.cpp | 61.11%   | 7 Missing ⚠️
src/polysolve/linear/AMGCL.hpp       | 0.00%    | 1 Missing ⚠️
src/polysolve/linear/EigenSolver.hpp | 0.00%    | 1 Missing ⚠️
src/polysolve/linear/HypreSolver.hpp | 0.00%    | 1 Missing ⚠️
src/polysolve/linear/Solver.hpp      | 0.00%    | 1 Missing ⚠️
Additional details and impacted files
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #93      +/-   ##
==========================================
- Coverage   81.67%   81.55%   -0.13%     
==========================================
  Files          49       49              
  Lines        2025     2082      +57     
  Branches      269      271       +2     
==========================================
+ Hits         1654     1698      +44     
- Misses        371      384      +13     
Flag      | Coverage Δ
polysolve | 81.55% <80.35%> (-0.13%) ⬇️



virtual double compute_grad_norm(const Problem &objFunc, const TVector &x, const TVector &grad) const
{
    return objFunc.grad_norm(grad, norm_type_);
}
Member

are you sure?

Contributor Author

I believe so. Is there an issue I am overlooking?

default_init_step_size = params["line_search"]["default_init_step_size"];
step_ratio = params["line_search"]["step_ratio"];

try_interpolating_step = params["line_search"]["try_interpolating_step"];
Member

why add this if it's not implemented?

Contributor Author

deleted

virtual double step_norm_rescaling(const std::string &norm_type) const {return 1;}
virtual double energy_norm_rescaling(const std::string &norm_type) const {return 1;}

virtual double grad_norm(const TVector &x, const std::string &norm_type) const {return x.norm();}
Member

this is super misleading. x is the solution, now it is the grad. which one is it?

Contributor Author

fixed

/// @return True if the solver should stop, false otherwise.
virtual bool stop(const TVector &x) { return false; }

virtual double grad_norm_rescaling(const std::string &norm_type) const {return 1;}
Member

string? enum?

Contributor Author

fixed. I use an enum now. It feels a bit awkward as it seems like something that would be more natural to define in the code defining the problem completely (e.g., PolyFEM), but I don't see a better solution at the moment.
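The enum-based fix might look roughly like this (a sketch, not the PR's actual code; `parse_norm_type` is a hypothetical helper, and the two values "Euclidean" and "L2" come from the JSON spec discussed below):

```cpp
#include <stdexcept>
#include <string>

// Hypothetical enum replacing the string-typed norm_type flagged in review.
enum class NormType
{
    Euclidean, // plain vector norm, the old default behavior
    L2,        // problem-defined L2 norm, rescaled by the Problem hooks
};

// Parse the JSON option ("Euclidean" / "L2") into the enum once, at setup,
// so a typo fails loudly instead of silently comparing strings later.
NormType parse_norm_type(const std::string &s)
{
    if (s == "Euclidean")
        return NormType::Euclidean;
    if (s == "L2")
        return NormType::L2;
    throw std::invalid_argument("unknown norm_type: " + s);
}
```

Parsing once at configuration time keeps the solver's hot path free of string comparisons and makes the set of valid options explicit in one place.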

double grad_norm_rescaling = 1;
bool try_interpolating_step;
double rel_interpolation_accuracy_tol = 0;
std::string norm_type = "Euclidean";
Member

where are these set?

Contributor Author

Solver::set_line_search

bool is_converged_status(const Status s)
{
-    return s == Status::XDeltaTolerance || s == Status::FDeltaTolerance || s == Status::GradNormTolerance;
+    return s == Status::XDeltaTolerance || s == Status::FDeltaTolerance || s == Status::GradNormTolerance || s == Status::NewtonDecrementTolerance || s == Status::RelGradNormTolerance || s == Status::RelXDeltaTolerance;
}
Member

why newton here? this interface is more generic

Contributor Author

The Newton decrement is a generic estimate of how close you are to the optimum and applies for non-Newton methods.
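For context, the Newton decrement λ satisfies λ² = ∇f(x)ᵀ H⁻¹ ∇f(x), and for strongly convex f, λ²/2 approximates f(x) − f*. Since a descent direction d = −H⁻¹∇f is already computed by the solver, λ is just sqrt(−∇fᵀd), which is why it generalizes beyond pure Newton. A minimal sketch (the `newton_decrement` helper is hypothetical, using plain `std::vector` instead of the solver's Eigen types):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Newton decrement from a gradient and an already-computed descent
// direction d = -H^{-1} grad: lambda = sqrt(-grad . d). For strongly
// convex f, lambda^2 / 2 estimates the suboptimality f(x) - f*.
double newton_decrement(const std::vector<double> &grad,
                        const std::vector<double> &direction)
{
    double dot = 0;
    for (std::size_t i = 0; i < grad.size(); ++i)
        dot += grad[i] * direction[i];
    return std::sqrt(-dot); // dot is negative for a descent direction
}
```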


m_descent_strategy = 0;

norm_type_ = solver_params["norm_type"];
Member

the style here is m_

Contributor Author

fixed

},
{
"pointer": "/norm_type",
"default": "L2",
Member

use options to enumerate all. in the code it is called Euclidean

Contributor Author

fixed

{
    m_logger.debug("[{}] large (or nan) linear solve residual {}>{} (‖∇f‖={})",
-                  name(), residual, residual_tolerance * characteristic_length, grad.norm());
+                  name(), residual, current_residual_tolerance, objFunc.grad_norm(grad, "L2"));
Member

L2 hardcoded

Contributor Author

fixed

virtual double step_norm_rescaling(const std::string &norm_type) const {return 1;}
virtual double energy_norm_rescaling(const std::string &norm_type) const {return 1;}

virtual double grad_norm(const TVector &x, const std::string &norm_type) const {return x.norm();}
Member

also, why is the problem responsible for this?

@maxpaik16 force-pushed the rescaled-tolerances branch from cbedb97 to f842fd4 on March 14, 2026 at 19:19