Volume 4

The fourth volume of the Journal of Nonsmooth Analysis and Optimization (2023)


1. Proximal gradient methods beyond monotony

Alberto De Marchi.
We address composite optimization problems, which consist in minimizing the sum of a smooth and a merely lower semicontinuous function, without any convexity assumptions. Numerical solutions of these problems can be obtained by proximal gradient methods, which often rely on a line search procedure as a globalization mechanism. We consider an adaptive nonmonotone proximal gradient scheme based on an averaged merit function and establish asymptotic convergence guarantees under weak assumptions, delivering results on par with the monotone strategy. Global worst-case rates for the iterates and a stationarity measure are also derived. Finally, a numerical example indicates the potential of nonmonotonicity and spectral approximations.
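
As a point of reference, the following is a minimal finite-dimensional sketch of a proximal gradient iteration with a nonmonotone acceptance test against an exponentially averaged merit value, in the spirit of the abstract; it is not the paper's algorithm, and the function names and parameters (mu, sigma, the l1 prox) are illustrative assumptions.

    import numpy as np

    def prox_l1(x, t):
        # proximal map of t*||.||_1 (soft-thresholding), a simple nonsmooth example
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def nonmonotone_proximal_gradient(f, grad_f, g, prox_g, x0, gamma0=1.0,
                                      mu=0.85, sigma=1e-4, max_iter=200):
        x = np.asarray(x0, dtype=float)
        merit = f(x) + g(x)        # averaged merit, initialised at the objective value
        gamma = gamma0             # step size, adapted by backtracking
        for _ in range(max_iter):
            grad = grad_f(x)
            while True:
                x_new = prox_g(x - gamma * grad, gamma)   # proximal gradient trial step
                phi_new = f(x_new) + g(x_new)
                decrease = sigma / gamma * np.linalg.norm(x_new - x) ** 2
                if phi_new <= merit - decrease:           # compare with the averaged merit,
                    break                                 # not with the last objective value
                gamma *= 0.5                              # backtrack on failure
            merit = mu * merit + (1.0 - mu) * phi_new     # exponential averaging permits
            x = x_new                                     # nonmonotone objective values
        return x

With f(x) = 0.5*||Ax - b||^2, g = alpha*||.||_1 and prox_g = prox_l1, this reduces to a nonmonotone variant of classical iterative soft-thresholding.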

2. On Convergence of Binary Trust-Region Steepest Descent

Paul Manns; Mirko Hahn; Christian Kirches; Sven Leyffer; Sebastian Sager.
Binary trust-region steepest descent (BTR) and combinatorial integral approximation (CIA) are two recently investigated approaches for the solution of optimization problems with distributed binary-/discrete-valued variables (control functions). We show improved convergence results for BTR by imposing a compactness assumption that is similar to the convergence theory of CIA. As a corollary we conclude that BTR also constitutes a descent algorithm on the continuous relaxation and its iterates converge weakly-$^*$ to stationary points of the latter. We provide computational results that validate our findings. In addition, we observe a regularizing effect of BTR, which we explore by means of a hybridization of CIA and BTR.
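
The following is a heavily simplified, discretized sketch of one binary trust-region step: flip the binary control on the cells with the best predicted decrease of a linear model, subject to an L1-type budget, and shrink the radius if the actual decrease is poor. It is meant only to convey the flavour of such methods; the paper works with control functions, and the function names and acceptance threshold below are assumptions, not taken from the paper.

    import numpy as np

    def btr_step(J, grad, w, delta, h, rho_min=1e-4):
        # One simplified trust-region iteration on a binary control w in {0, 1}^n.
        # J: relaxed objective (callable), grad: its gradient at w (one entry per cell),
        # delta: trust-region radius in the L1 norm, h: cell volume of the uniform mesh.
        g = np.asarray(grad, dtype=float)
        pred = (1.0 - 2.0 * w) * g * h            # predicted change if cell i is flipped
        order = np.argsort(pred)                  # most promising flips first
        budget = int(delta / h)                   # number of cells the radius allows
        flips = [i for i in order[:budget] if pred[i] < 0.0]
        if not flips:
            return w, delta, False                # no descent flip within this radius
        w_trial = w.copy()
        w_trial[flips] = 1 - w_trial[flips]
        predicted = -pred[flips].sum()
        actual = J(w) - J(w_trial)
        if actual >= rho_min * predicted:         # sufficient actual decrease: accept
            return w_trial, 2.0 * delta, True
        return w, 0.5 * delta, False              # reject and shrink the trust region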

3. Optimal Control of a Viscous Two-Field Damage Model with Fatigue

Livia Betz.
Motivated by fatigue damage models, this paper addresses optimal control problems governed by a non-smooth system featuring two non-differentiable mappings. The system consists of a coupling between a doubly non-smooth history-dependent evolution and an elliptic PDE. After proving the directional differentiability of the associated solution mapping, an optimality system which is stronger than the one obtained by classical smoothening procedures is derived. If one of the non-differentiable mappings becomes smooth, the optimality conditions are of strong stationary type, i.e., equivalent to the primal necessary optimality condition.

4. Proximal methods for point source localisation

Tuomo Valkonen.
Point source localisation is generally modelled as a Lasso-type problem on measures. However, optimisation methods in non-Hilbert spaces, such as the space of Radon measures, are much less developed than in Hilbert spaces. Most numerical algorithms for point source localisation are based on the Frank-Wolfe conditional gradient method, for which ad hoc convergence theory is developed. We develop extensions of proximal-type methods to spaces of measures. This includes forward-backward splitting, its inertial version, and primal-dual proximal splitting. Their convergence proofs follow standard patterns. We demonstrate their numerical efficacy.
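
As a simple point of comparison, here is a finite-dimensional, Hilbert-space analogue of forward-backward splitting applied to a Lasso-type problem min_x 0.5*||Ax - b||^2 + alpha*||x||_1; the measure-space versions studied in the paper require additional care, and the function name and step-size choice below are merely illustrative.

    import numpy as np

    def forward_backward_lasso(A, b, alpha, n_iter=500):
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
        tau = 1.0 / L                          # constant step size
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)           # forward (gradient) step on the smooth part
            z = x - tau * grad
            # backward (proximal) step: soft-thresholding, the prox of tau*alpha*||.||_1
            x = np.sign(z) * np.maximum(np.abs(z) - tau * alpha, 0.0)
        return x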