Volume 2

The second volume of the Journal of Nonsmooth Analysis and Optimization (2021)


1. Relations between Abs-Normal NLPs and MPCCs. Part 1: Strong Constraint Qualifications

Lisa C. Hegerhorst-Schultchen ; Christian Kirches ; Marc C. Steinbach.
This work is part of an ongoing effort to compare non-smooth optimization problems in abs-normal form with Mathematical Programs with Complementarity Constraints (MPCCs). We study the general abs-normal NLP with equality and inequality constraints in relation to an equivalent MPCC reformulation. We show that kink qualifications and MPCC constraint qualifications of linear independence type and Mangasarian-Fromovitz type are equivalent. Then we consider strong stationarity concepts with first- and second-order optimality conditions, which again turn out to be equivalent for the two problem classes. Throughout we also consider specific slack reformulations suggested in [9], which preserve constraint qualifications of linear independence type but not of Mangasarian-Fromovitz type.
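For orientation (generic notation, not taken from the paper), the link between the absolute value and complementarity that underlies such MPCC reformulations can be sketched as follows: splitting a switching variable into nonnegative parts,
\[
z = u - v, \qquad |z| = u + v, \qquad 0 \le u \perp v \ge 0,
\]
turns each occurrence of |z| in an abs-normal NLP into a complementarity constraint, which is the basic device behind the equivalent MPCC formulations compared here.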
Section: Original research articles

2. Relations between Abs-Normal NLPs and MPCCs. Part 2: Weak Constraint Qualifications

Lisa C. Hegerhorst-Schultchen ; Christian Kirches ; Marc C. Steinbach.
This work continues an ongoing effort to compare non-smooth optimization problems in abs-normal form to Mathematical Programs with Complementarity Constraints (MPCCs). We study general Nonlinear Programs with equality and inequality constraints in abs-normal form, so-called Abs-Normal NLPs, and their relation to equivalent MPCC reformulations. We introduce the concepts of Abadie's and Guignard's kink qualification and prove relations to MPCC-ACQ and MPCC-GCQ for the counterpart MPCC formulations. Due to non-uniqueness of a specific slack reformulation suggested in [10], the relations are non-trivial. It turns out that constraint qualifications of Abadie type are preserved. We also prove the weaker result that equivalence of Guignard's (and Abadie's) constraint qualifications holds for all branch problems, while the question of GCQ preservation remains open. Finally, we introduce M-stationarity and B-stationarity concepts for abs-normal NLPs and prove first-order optimality conditions corresponding to MPCC counterpart formulations.
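As a reminder of the standard (non-MPCC) definitions that these kink qualifications generalize, for a feasible set F and a feasible point x̄ one has
\[
\text{ACQ:}\quad T_{F}(\bar x) = L_{F}(\bar x), \qquad\qquad
\text{GCQ:}\quad T_{F}(\bar x)^{\circ} = L_{F}(\bar x)^{\circ},
\]
where T_F(x̄) is the tangent cone, L_F(x̄) the linearization cone, and ° the polar cone; the MPCC variants MPCC-ACQ and MPCC-GCQ replace L_F(x̄) by the MPCC-linearized cone. These are textbook definitions, not quoted from the paper.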
Section: Original research articles

3. Uniform Regularity of Set-Valued Mappings and Stability of Implicit Multifunctions

Nguyen Duy Cuong ; Alexander Y. Kruger.
We propose a unifying general view on the theory of regularity, i.e. one that does not assume the mapping to have any particular structure, and clarify the relationships between the existing primal and dual quantitative sufficient and necessary conditions, including their hierarchy. We expose the typical sequence of regularity assertions, often hidden in the proofs, and the roles of the assumptions involved in the assertions, in particular those on the underlying space: general metric, normed, Banach or Asplund. As a consequence, we formulate primal and dual conditions for the stability properties of solution mappings to inclusions.
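As a point of reference (standard definition, not specific to this article), the central property in this area is metric regularity: a set-valued mapping F is metrically regular at (x̄, ȳ) ∈ gph F with modulus τ > 0 if
\[
d\bigl(x, F^{-1}(y)\bigr) \;\le\; \tau\, d\bigl(y, F(x)\bigr)
\qquad \text{for all } (x,y) \text{ near } (\bar x, \bar y).
\]
The primal and dual conditions discussed in the abstract are quantitative sufficient or necessary conditions for properties of this type.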
Section: Original research articles

4. On inner calmness*, generalized calculus, and derivatives of the normal cone mapping

Matúš Benko.
In this paper, we study continuity and Lipschitzian properties of set-valued mappings, focusing on inner-type conditions. We introduce the new notions of inner calmness* and its relaxation, fuzzy inner calmness*. We show that polyhedral maps enjoy inner calmness* and examine in depth the (fuzzy) inner calmness* of a multiplier mapping associated with constraint systems. Then we utilize these notions to develop some new rules of generalized differential calculus, mainly for the primal objects (e.g. tangent cones). In particular, we propose an exact chain rule for graphical derivatives. We apply these results to compute the derivatives of the normal cone mapping, essential e.g. for sensitivity analysis of variational inequalities.
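For readers unfamiliar with the primal objects mentioned above, the graphical derivative of a set-valued mapping S at (x̄, ȳ) ∈ gph S is defined via the tangent cone to the graph (standard notation, not quoted from the paper):
\[
DS(\bar x, \bar y)(u) \;:=\; \{\, v \;:\; (u,v) \in T_{\operatorname{gph} S}(\bar x, \bar y) \,\}.
\]
The exact chain rule mentioned in the abstract is a calculus rule for objects of this kind.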
Section: Original research articles

5. On implicit variables in optimization theory

Matúš Benko ; Patrick Mehlitz.
Implicit variables of a mathematical program are variables which do not need to be optimized but are used to model feasibility conditions. They frequently appear in several different problem classes of optimization theory, including bilevel programming, evaluated multiobjective optimization, and nonlinear optimization problems with slack variables. In order to deal with implicit variables, they are often interpreted as explicit ones. Here, we first point out that this is a light-headed approach which induces artificial locally optimal solutions. Afterwards, we derive various Mordukhovich-stationarity-type necessary optimality conditions which correspond to treating the implicit variables as explicit ones on the one hand, or using them only implicitly to model the constraints on the other. A detailed comparison of the obtained stationarity conditions as well as the associated underlying constraint qualifications is provided. Overall, we proceed in a fairly general setting relying on modern tools of variational analysis. Finally, we apply our findings to different well-known problem classes of mathematical optimization in order to visualize the obtained theory.
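Roughly (schematic formulation, not the paper's notation), the distinction is the following: if feasibility of x is modelled with the aid of an implicit variable y, one may minimize either over the projected feasible set or over the lifted one,
\[
\min_{x}\; f(x) \ \text{ s.t. } \ x \in \{x : \exists\, y \text{ with } (x,y)\in \Phi\}
\qquad \text{vs.} \qquad
\min_{x,y}\; f(x) \ \text{ s.t. } \ (x,y) \in \Phi .
\]
Both problems possess the same globally optimal x, but a local minimizer (x̄, ȳ) of the lifted problem need not yield a local minimizer x̄ of the projected problem, which is the source of the artificial locally optimal solutions mentioned above.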
Section: Original research articles

6. Inexact and Stochastic Generalized Conditional Gradient with Augmented Lagrangian and Proximal Step

Antonio Silveti-Falls ; Cesare Molinari ; Jalal Fadili.
In this paper we propose and analyze inexact and stochastic versions of the CGALP algorithm developed in [25], which we denote ICGALP, that allow for errors in the computation of several important quantities. In particular, this allows one to compute some gradients, proximal terms, and/or linear minimization oracles in an inexact fashion that facilitates the practical application of the algorithm to computationally intensive settings, e.g., in high (or possibly infinite) dimensional Hilbert spaces commonly found in machine learning problems. The algorithm is able to solve composite minimization problems involving the sum of three convex proper lower-semicontinuous functions subject to an affine constraint of the form Ax = b for some bounded linear operator A. Only one of the functions in the objective is assumed to be differentiable; the other two are assumed to have an accessible proximal operator and a linear minimization oracle, respectively. As main results, we show convergence of the Lagrangian values (so-called convergence in the Bregman sense) and asymptotic feasibility of the affine constraint, as well as strong convergence of the sequence of dual variables to a solution of the dual problem, in an almost sure sense. Almost sure convergence rates are given for the Lagrangian values and the feasibility gap for the ergodic primal variables. Rates in expectation are given for the Lagrangian values and the feasibility gap subsequentially in the pointwise sense. Numerical experiments […]
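Schematically (following the description in the abstract, with generic names for the three functions), the problem template is
\[
\min_{x}\; f(x) + g(x) + h(x) \quad \text{s.t.} \quad Ax = b,
\]
where f is differentiable, g is accessed through its proximal operator, h through a linear minimization oracle, and A is a bounded linear operator. As the title indicates, the method combines an augmented Lagrangian treatment of the constraint Ax = b with conditional gradient and proximal steps, now allowing the gradient, proximal and oracle computations to be inexact or stochastic.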
Section: Original research articles

7. A new elementary proof for M-stationarity under MPCC-GCQ for mathematical programs with complementarity constraints

Felix Harder.
It is known in the literature that local minimizers of mathematical programs with complementarity constraints (MPCCs) are so-called M-stationary points, if a weak MPCC-tailored Guignard constraint qualification (called MPCC-GCQ) holds. In this paper we present a new elementary proof for this result. Our proof is significantly simpler than existing proofs and does not rely on deeper technical theory such as calculus rules for limiting normal cones. A crucial ingredient is a proof of a (to the best of our knowledge previously open) conjecture, which was formulated in a Diploma thesis by Schinabeck.
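For reference, one common way to state M-stationarity for the MPCC
\[
\min_x\; f(x) \ \text{ s.t. } \ g(x) \le 0,\quad h(x) = 0,\quad 0 \le G(x) \perp H(x) \ge 0
\]
is: there exist multipliers λ ≥ 0, μ, ν, ξ with λ_i g_i(x̄) = 0 such that
\[
0 = \nabla f(\bar x) + \nabla g(\bar x)^{\top}\lambda + \nabla h(\bar x)^{\top}\mu
  - \nabla G(\bar x)^{\top}\nu - \nabla H(\bar x)^{\top}\xi,
\]
where ν_i = 0 whenever G_i(x̄) > 0, ξ_i = 0 whenever H_i(x̄) > 0, and on the biactive set G_i(x̄) = H_i(x̄) = 0 either ν_i ξ_i = 0 or both ν_i > 0 and ξ_i > 0 (sign conventions vary in the literature; this is not quoted from the paper).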
Section: Original research articles

8. Optimal Control of Plasticity with Inertia

Stephan Walther.
The paper is concerned with an optimal control problem governed by the equations of elastoplasticity with linear kinematic hardening and the inertia term at small strain. The objective is to optimize the displacement field and plastic strain by controlling volume forces. The idea given in [10] is used to transform the state equation into an evolution variational inequality (EVI) involving a certain maximal monotone operator. Results from [27] are then used to analyze the EVI. A regularization is obtained via the Yosida approximation of the maximal monotone operator; this approximation is smoothed further to derive optimality conditions for the smoothed optimal control problem.
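In abstract form (generic notation, not taken from the paper), an EVI of the kind mentioned here reads: find z with
\[
\dot z(t) + B\bigl(z(t)\bigr) \ni \ell(t) \ \ \text{for a.e. } t, \qquad z(0) = z_0,
\]
where B is a maximal monotone operator on a Hilbert space. Its Yosida approximation
\[
B_{\lambda} := \tfrac{1}{\lambda}\bigl(\mathrm{Id} - (\mathrm{Id} + \lambda B)^{-1}\bigr), \qquad \lambda > 0,
\]
is single-valued, monotone and Lipschitz continuous, which is what underlies the regularization approach described above.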
Section: Original research articles