<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/">
  <channel>
    <title>Journal of Nonsmooth Analysis and Optimization - Latest Publications</title>
    <description>Latest articles</description>
    <image>
      <url>https://jnsao.episciences.org/img/episciences_sign_50x50.png</url>
      <title>episciences.org</title>
      <link>https://jnsao.episciences.org</link>
    </image>
    <pubDate>Sat, 14 Mar 2026 03:07:24 +0000</pubDate>
    <generator>episciences.org</generator>
    <link>https://jnsao.episciences.org</link>
    <author>Journal of Nonsmooth Analysis and Optimization</author>
    <dc:creator>Journal of Nonsmooth Analysis and Optimization</dc:creator>
    <atom:link rel="self" type="application/rss+xml" href="https://jnsao.episciences.org/rss/papers"/>
    <atom:link rel="hub" href="http://pubsubhubbub.appspot.com/"/>
    <item>
      <title>A penalty barrier framework for nonconvex constrained optimization</title>
      <description><![CDATA[We consider minimization problems with structured objective function and smooth constraints, and present a flexible framework that combines the beneficial regularization effects of (exact) penalty and interior-point methods. In the fully nonconvex setting, a pure barrier approach requires careful steps when approaching the infeasible set, thus hindering convergence. We show how a tight integration with a penalty scheme mitigates this issue and enables the construction of subproblems whose domain is independent of the explicit constraints. This decoupling allows us to leverage efficient solvers designed for unconstrained or suitably structured optimization tasks. The key behind all this is a marginalization step: closely related to a conjugacy operation, this step effectively merges (exact) penalty and barrier into a smooth, full domain functional object. When the penalty exactness takes effect, the generated subproblems do not suffer the ill-conditioning typical of barrier methods, nor do they exhibit the nonsmoothness of exact penalty terms. We provide a theoretical characterization of the algorithm and its asymptotic properties, deriving convergence results for fully nonconvex problems. Stronger conclusions are available for the convex setting, where optimality can be guaranteed. Illustrative examples and numerical simulations demonstrate the wide range of problems our theory and algorithm are able to cover.]]></description>
      <pubDate>Tue, 19 Aug 2025 09:44:45 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2025-14585</link>
      <guid>https://doi.org/10.46298/jnsao-2025-14585</guid>
      <author>De Marchi, Alberto</author>
      <author>Themelis, Andreas</author>
      <dc:creator>De Marchi, Alberto</dc:creator>
      <dc:creator>Themelis, Andreas</dc:creator>
      <content:encoded><![CDATA[We consider minimization problems with structured objective function and smooth constraints, and present a flexible framework that combines the beneficial regularization effects of (exact) penalty and interior-point methods. In the fully nonconvex setting, a pure barrier approach requires careful steps when approaching the infeasible set, thus hindering convergence. We show how a tight integration with a penalty scheme mitigates this issue and enables the construction of subproblems whose domain is independent of the explicit constraints. This decoupling allows us to leverage efficient solvers designed for unconstrained or suitably structured optimization tasks. The key behind all this is a marginalization step: closely related to a conjugacy operation, this step effectively merges (exact) penalty and barrier into a smooth, full domain functional object. When the penalty exactness takes effect, the generated subproblems do not suffer the ill-conditioning typical of barrier methods, nor do they exhibit the nonsmoothness of exact penalty terms. We provide a theoretical characterization of the algorithm and its asymptotic properties, deriving convergence results for fully nonconvex problems. Stronger conclusions are available for the convex setting, where optimality can be guaranteed. Illustrative examples and numerical simulations demonstrate the wide range of problems our theory and algorithm are able to cover.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Single-loop methods for bilevel parameter learning in inverse imaging</title>
      <description><![CDATA[Bilevel optimisation is used in inverse imaging problems for hyperparameter learning/identification and experimental design, for instance, to find optimal regularisation parameters and forward operators. However, computationally, the process is costly. To reduce this cost, so-called single-loop approaches have recently been introduced. On each step of an outer optimisation method, they take just a single gradient step towards the solution of the inner problem. In this paper, we flexibilise the inner algorithm to include standard methods in inverse imaging. Moreover, as we have recently shown, significant performance improvements can be obtained in PDE-constrained optimisation by interweaving the steps of conventional iterative linear system solvers with the optimisation method. We now demonstrate how the adjoint equation in bilevel problems can also benefit from such interweaving. We evaluate the performance of our approach on identifying the deconvolution kernel for image deblurring, and the subsampling operator for magnetic resonance imaging (MRI).]]></description>
      <pubDate>Tue, 05 Aug 2025 08:13:12 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2025-15577</link>
      <guid>https://doi.org/10.46298/jnsao-2025-15577</guid>
      <author>Suonperä, Ensio</author>
      <author>Valkonen, Tuomo</author>
      <dc:creator>Suonperä, Ensio</dc:creator>
      <dc:creator>Valkonen, Tuomo</dc:creator>
      <content:encoded><![CDATA[Bilevel optimisation is used in inverse imaging problems for hyperparameter learning/identification and experimental design, for instance, to find optimal regularisation parameters and forward operators. However, computationally, the process is costly. To reduce this cost, so-called single-loop approaches have recently been introduced. On each step of an outer optimisation method, they take just a single gradient step towards the solution of the inner problem. In this paper, we flexibilise the inner algorithm to include standard methods in inverse imaging. Moreover, as we have recently shown, significant performance improvements can be obtained in PDE-constrained optimisation by interweaving the steps of conventional iterative linear system solvers with the optimisation method. We now demonstrate how the adjoint equation in bilevel problems can also benefit from such interweaving. We evaluate the performance of our approach on identifying the deconvolution kernel for image deblurring, and the subsampling operator for magnetic resonance imaging (MRI).]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Second-order conditions for spatio-temporally sparse optimal control via second subderivatives</title>
      <description><![CDATA[We address second-order optimality conditions for optimal control problems involving sparsity functionals which induce spatio-temporal sparsity patterns. We employ the notion of (weak) second subderivatives. With this approach, we are able to reproduce the results from Casas, Herzog, and Wachsmuth (ESAIM COCV, 23, 2017, p. 263-295). Our analysis yields a slight improvement of one of these results and also opens the door for the sensitivity analysis of this class of problems.]]></description>
      <pubDate>Wed, 18 Dec 2024 10:49:50 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2024-12604</link>
      <guid>https://doi.org/10.46298/jnsao-2024-12604</guid>
      <author>Borchard, Nicolas</author>
      <author>Wachsmuth, Gerd</author>
      <dc:creator>Borchard, Nicolas</dc:creator>
      <dc:creator>Wachsmuth, Gerd</dc:creator>
      <content:encoded><![CDATA[We address second-order optimality conditions for optimal control problems involving sparsity functionals which induce spatio-temporal sparsity patterns. We employ the notion of (weak) second subderivatives. With this approach, we are able to reproduce the results from Casas, Herzog, and Wachsmuth (ESAIM COCV, 23, 2017, p. 263-295). Our analysis yields a slight improvement of one of these results and also opens the door for the sensitivity analysis of this class of problems.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A topological derivative-based algorithm to solve optimal control problems with $L^0(\Omega)$ control cost</title>
      <description><![CDATA[In this paper, we consider optimization problems with $L^0$-cost of the controls. Here, we take the support of the control as an independent optimization variable. Topological derivatives of the corresponding value function with respect to variations of the support are derived. These topological derivatives are used in a novel gradient descent algorithm with Armijo line-search. Under suitable assumptions, the algorithm produces a minimizing sequence.]]></description>
      <pubDate>Wed, 26 Jun 2024 08:54:55 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2024-12366</link>
      <guid>https://doi.org/10.46298/jnsao-2024-12366</guid>
      <author>Wachsmuth, Daniel</author>
      <dc:creator>Wachsmuth, Daniel</dc:creator>
      <content:encoded><![CDATA[In this paper, we consider optimization problems with $L^0$-cost of the controls. Here, we take the support of the control as an independent optimization variable. Topological derivatives of the corresponding value function with respect to variations of the support are derived. These topological derivatives are used in a novel gradient descent algorithm with Armijo line-search. Under suitable assumptions, the algorithm produces a minimizing sequence.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Local properties and augmented Lagrangians in fully nonconvex composite optimization</title>
      <description><![CDATA[A broad class of optimization problems can be cast in composite form, that is, considering the minimization of the composition of a lower semicontinuous function with a differentiable mapping. This paper investigates the versatile template of composite optimization without any convexity assumptions. First- and second-order optimality conditions are discussed. We highlight the difficulties that stem from the lack of convexity when dealing with necessary conditions in a Lagrangian framework and when considering error bounds. Building upon these characterizations, a local convergence analysis is delineated for a recently developed augmented Lagrangian method, deriving rates of convergence in the fully nonconvex setting.]]></description>
      <pubDate>Thu, 16 May 2024 10:18:36 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2024-12235</link>
      <guid>https://doi.org/10.46298/jnsao-2024-12235</guid>
      <author>De Marchi, Alberto</author>
      <author>Mehlitz, Patrick</author>
      <dc:creator>De Marchi, Alberto</dc:creator>
      <dc:creator>Mehlitz, Patrick</dc:creator>
      <content:encoded><![CDATA[A broad class of optimization problems can be cast in composite form, that is, considering the minimization of the composition of a lower semicontinuous function with a differentiable mapping. This paper investigates the versatile template of composite optimization without any convexity assumptions. First- and second-order optimality conditions are discussed. We highlight the difficulties that stem from the lack of convexity when dealing with necessary conditions in a Lagrangian framework and when considering error bounds. Building upon these characterizations, a local convergence analysis is delineated for a recently developed augmented Lagrangian method, deriving rates of convergence in the fully nonconvex setting.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Input Regularization for Integer Optimal Control in BV with Applications to Control of Poroelastic and Poroviscoelastic Systems</title>
      <description><![CDATA[We revisit a class of integer optimal control problems for which a trust-region method has been proposed and analyzed in arXiv:2106.13453v3 [math.OC]. While the algorithm proposed in arXiv:2106.13453v3 [math.OC] successfully solves the class of optimization problems under consideration, its convergence analysis requires restrictive regularity assumptions. There are many examples of integer optimal control problems involving partial differential equations where these regularity assumptions are not satisfied. In this article we provide a way to bypass the restrictive regularity assumptions by introducing an additional partial regularization of the control inputs by means of mollification and proving a $\Gamma$-convergence-type result when the support parameter of the mollification is driven to zero. We highlight the applicability of this theory in the case of fluid flows through deformable porous media equations that arise in biomechanics. We show that the regularity assumptions are violated in the case of poro-visco-elastic systems, and thus one needs to use the regularization of the control input introduced in this article. Associated numerical results show that while the homotopy can help to find better objective values and points of lower instationarity, the practical performance of the algorithm without the input regularization may be on par with the homotopy.]]></description>
      <pubDate>Mon, 29 Apr 2024 10:47:10 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2024-10529</link>
      <guid>https://doi.org/10.46298/jnsao-2024-10529</guid>
      <author>Bociu, Lorena</author>
      <author>Manns, Paul</author>
      <author>Severitt, Marvin</author>
      <author>Strikwerda, Sarah</author>
      <dc:creator>Bociu, Lorena</dc:creator>
      <dc:creator>Manns, Paul</dc:creator>
      <dc:creator>Severitt, Marvin</dc:creator>
      <dc:creator>Strikwerda, Sarah</dc:creator>
      <content:encoded><![CDATA[We revisit a class of integer optimal control problems for which a trust-region method has been proposed and analyzed in arXiv:2106.13453v3 [math.OC]. While the algorithm proposed in arXiv:2106.13453v3 [math.OC] successfully solves the class of optimization problems under consideration, its convergence analysis requires restrictive regularity assumptions. There are many examples of integer optimal control problems involving partial differential equations where these regularity assumptions are not satisfied. In this article we provide a way to bypass the restrictive regularity assumptions by introducing an additional partial regularization of the control inputs by means of mollification and proving a $\Gamma$-convergence-type result when the support parameter of the mollification is driven to zero. We highlight the applicability of this theory in the case of fluid flows through deformable porous media equations that arise in biomechanics. We show that the regularity assumptions are violated in the case of poro-visco-elastic systems, and thus one needs to use the regularization of the control input introduced in this article. Associated numerical results show that while the homotopy can help to find better objective values and points of lower instationarity, the practical performance of the algorithm without the input regularization may be on par with the homotopy.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Generalized Alternating Projections on Manifolds and Convex Sets</title>
      <description><![CDATA[In this paper, we extend the previous convergence results for the generalized alternating projection method applied to subspaces in [arXiv:1703.10547] to hold also for smooth manifolds. We show that the algorithm locally behaves similarly in the subspace and manifold settings and that the same rates are obtained. We also present convergence rate results for when the algorithm is applied to non-empty, closed, and convex sets. The results are based on a finite identification property that implies that the algorithm, after an initial identification phase, solves a smooth manifold feasibility problem. Therefore, the rates in this paper hold asymptotically for problems in which this identification property is satisfied. We present a few examples where this is the case, and also a counterexample for when it is not.]]></description>
      <pubDate>Tue, 09 Apr 2024 09:23:24 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2023-7139</link>
      <guid>https://doi.org/10.46298/jnsao-2023-7139</guid>
      <author>Fält, Mattias</author>
      <author>Giselsson, Pontus</author>
      <dc:creator>Fält, Mattias</dc:creator>
      <dc:creator>Giselsson, Pontus</dc:creator>
      <content:encoded><![CDATA[In this paper, we extend the previous convergence results for the generalized alternating projection method applied to subspaces in [arXiv:1703.10547] to hold also for smooth manifolds. We show that the algorithm locally behaves similarly in the subspace and manifold settings and that the same rates are obtained. We also present convergence rate results for when the algorithm is applied to non-empty, closed, and convex sets. The results are based on a finite identification property that implies that the algorithm, after an initial identification phase, solves a smooth manifold feasibility problem. Therefore, the rates in this paper hold asymptotically for problems in which this identification property is satisfied. We present a few examples where this is the case, and also a counterexample for when it is not.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Proximal methods for point source localisation</title>
      <description><![CDATA[Point source localisation is generally modelled as a Lasso-type problem on measures. However, optimisation methods in non-Hilbert spaces, such as the space of Radon measures, are much less developed than in Hilbert spaces. Most numerical algorithms for point source localisation are based on the Frank-Wolfe conditional gradient method, for which ad hoc convergence theory is developed. We develop extensions of proximal-type methods to spaces of measures. This includes forward-backward splitting, its inertial version, and primal-dual proximal splitting. Their convergence proofs follow standard patterns. We demonstrate their numerical efficacy.]]></description>
      <pubDate>Thu, 21 Sep 2023 10:25:31 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2023-10433</link>
      <guid>https://doi.org/10.46298/jnsao-2023-10433</guid>
      <author>Valkonen, Tuomo</author>
      <dc:creator>Valkonen, Tuomo</dc:creator>
      <content:encoded><![CDATA[Point source localisation is generally modelled as a Lasso-type problem on measures. However, optimisation methods in non-Hilbert spaces, such as the space of Radon measures, are much less developed than in Hilbert spaces. Most numerical algorithms for point source localisation are based on the Frank-Wolfe conditional gradient method, for which ad hoc convergence theory is developed. We develop extensions of proximal-type methods to spaces of measures. This includes forward-backward splitting, its inertial version, and primal-dual proximal splitting. Their convergence proofs follow standard patterns. We demonstrate their numerical efficacy.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Optimal Control of a Viscous Two-Field Damage Model with Fatigue</title>
      <description><![CDATA[Motivated by fatigue damage models, this paper addresses optimal control problems governed by a non-smooth system featuring two non-differentiable mappings. This consists of a coupling between a doubly non-smooth history-dependent evolution and an elliptic PDE. After proving the directional differentiability of the associated solution mapping, an optimality system which is stronger than the one obtained by classical smoothing procedures is derived. If one of the non-differentiable mappings becomes smooth, the optimality conditions are of strong stationary type, i.e., equivalent to the primal necessary optimality condition.]]></description>
      <pubDate>Fri, 11 Aug 2023 09:22:44 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2023-10834</link>
      <guid>https://doi.org/10.46298/jnsao-2023-10834</guid>
      <author>Betz, Livia</author>
      <dc:creator>Betz, Livia</dc:creator>
      <content:encoded><![CDATA[Motivated by fatigue damage models, this paper addresses optimal control problems governed by a non-smooth system featuring two non-differentiable mappings. This consists of a coupling between a doubly non-smooth history-dependent evolution and an elliptic PDE. After proving the directional differentiability of the associated solution mapping, an optimality system which is stronger than the one obtained by classical smoothing procedures is derived. If one of the non-differentiable mappings becomes smooth, the optimality conditions are of strong stationary type, i.e., equivalent to the primal necessary optimality condition.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On Convergence of Binary Trust-Region Steepest Descent</title>
      <description><![CDATA[Binary trust-region steepest descent (BTR) and combinatorial integral approximation (CIA) are two recently investigated approaches for the solution of optimization problems with distributed binary-/discrete-valued variables (control functions). We show improved convergence results for BTR by imposing a compactness assumption that is similar to the convergence theory of CIA. As a corollary we conclude that BTR also constitutes a descent algorithm on the continuous relaxation and its iterates converge weakly-$^*$ to stationary points of the latter. We provide computational results that validate our findings. In addition, we observe a regularizing effect of BTR, which we explore by means of a hybridization of CIA and BTR.]]></description>
      <pubDate>Tue, 25 Jul 2023 12:43:07 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2023-10164</link>
      <guid>https://doi.org/10.46298/jnsao-2023-10164</guid>
      <author>Manns, Paul</author>
      <author>Hahn, Mirko</author>
      <author>Kirches, Christian</author>
      <author>Leyffer, Sven</author>
      <author>Sager, Sebastian</author>
      <dc:creator>Manns, Paul</dc:creator>
      <dc:creator>Hahn, Mirko</dc:creator>
      <dc:creator>Kirches, Christian</dc:creator>
      <dc:creator>Leyffer, Sven</dc:creator>
      <dc:creator>Sager, Sebastian</dc:creator>
      <content:encoded><![CDATA[Binary trust-region steepest descent (BTR) and combinatorial integral approximation (CIA) are two recently investigated approaches for the solution of optimization problems with distributed binary-/discrete-valued variables (control functions). We show improved convergence results for BTR by imposing a compactness assumption that is similar to the convergence theory of CIA. As a corollary we conclude that BTR also constitutes a descent algorithm on the continuous relaxation and its iterates converge weakly-$^*$ to stationary points of the latter. We provide computational results that validate our findings. In addition, we observe a regularizing effect of BTR, which we explore by means of a hybridization of CIA and BTR.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Proximal gradient methods beyond monotony</title>
      <description><![CDATA[We address composite optimization problems, which consist in minimizing the sum of a smooth and a merely lower semicontinuous function, without any convexity assumptions. Numerical solutions of these problems can be obtained by proximal gradient methods, which often rely on a line search procedure as a globalization mechanism. We consider an adaptive nonmonotone proximal gradient scheme based on an averaged merit function and establish asymptotic convergence guarantees under weak assumptions, delivering results on par with the monotone strategy. Global worst-case rates for the iterates and a stationarity measure are also derived. Finally, a numerical example indicates the potential of nonmonotonicity and spectral approximations.]]></description>
      <pubDate>Fri, 02 Jun 2023 07:37:51 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2023-10290</link>
      <guid>https://doi.org/10.46298/jnsao-2023-10290</guid>
      <author>De Marchi, Alberto</author>
      <dc:creator>De Marchi, Alberto</dc:creator>
      <content:encoded><![CDATA[We address composite optimization problems, which consist in minimizing the sum of a smooth and a merely lower semicontinuous function, without any convexity assumptions. Numerical solutions of these problems can be obtained by proximal gradient methods, which often rely on a line search procedure as a globalization mechanism. We consider an adaptive nonmonotone proximal gradient scheme based on an averaged merit function and establish asymptotic convergence guarantees under weak assumptions, delivering results on par with the monotone strategy. Global worst-case rates for the iterates and a stationarity measure are also derived. Finally, a numerical example indicates the potential of nonmonotonicity and spectral approximations.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Second-order conditions for non-uniformly convex integrands: quadratic growth in $L^1$</title>
      <description><![CDATA[We study no-gap second-order optimality conditions for a non-uniformly convex and non-smooth integral functional. The integral functional is extended to the space of measures. The obtained second-order derivatives contain integrals on lower-dimensional manifolds. The proofs utilize the convex pre-conjugate, which is an integral functional on the space of continuous functions. Applications to non-smooth optimal control problems are given.]]></description>
      <pubDate>Mon, 23 May 2022 06:05:10 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2022-8733</link>
      <guid>https://doi.org/10.46298/jnsao-2022-8733</guid>
      <author>Wachsmuth, Daniel</author>
      <author>Wachsmuth, Gerd</author>
      <dc:creator>Wachsmuth, Daniel</dc:creator>
      <dc:creator>Wachsmuth, Gerd</dc:creator>
      <content:encoded><![CDATA[We study no-gap second-order optimality conditions for a non-uniformly convex and non-smooth integral functional. The integral functional is extended to the space of measures. The obtained second-order derivatives contain integrals on lower-dimensional manifolds. The proofs utilize the convex pre-conjugate, which is an integral functional on the space of continuous functions. Applications to non-smooth optimal control problems are given.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Analysis of the implicit Euler time-discretization of passive linear descriptor complementarity systems</title>
      <description><![CDATA[This article is largely concerned with the time-discretization of descriptor-variable systems coupled with complementarity constraints. They are named descriptor-variable linear complementarity systems (DVLCS). More specifically, passive DVLCS with minimal state-space representation are studied. The implicit Euler discretization of DVLCS is analysed: the one-step non-smooth problem (OSNSP), which is a generalized equation, is shown to be well-posed under some conditions. Then the convergence of the discretized solutions is studied. Several examples illustrate the applicability and the limitations of the developments.]]></description>
      <pubDate>Thu, 12 May 2022 09:21:48 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2022-7269</link>
      <guid>https://doi.org/10.46298/jnsao-2022-7269</guid>
      <author>Brogliato, Bernard</author>
      <author>Rocca, Alexandre</author>
      <dc:creator>Brogliato, Bernard</dc:creator>
      <dc:creator>Rocca, Alexandre</dc:creator>
      <content:encoded><![CDATA[This article is largely concerned with the time-discretization of descriptor-variable systems coupled with complementarity constraints. They are named descriptor-variable linear complementarity systems (DVLCS). More specifically, passive DVLCS with minimal state-space representation are studied. The implicit Euler discretization of DVLCS is analysed: the one-step non-smooth problem (OSNSP), which is a generalized equation, is shown to be well-posed under some conditions. Then the convergence of the discretized solutions is studied. Several examples illustrate the applicability and the limitations of the developments.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Minimal angle spread in the probability simplex with respect to the uniform distribution</title>
      <description><![CDATA[We compute the minimal angle spread with respect to the uniform distribution in the probability simplex. The resulting optimization problem is analytically solved. The formula provided shows that the minimal angle spread approaches zero as the dimension tends to infinity. We also discuss an application in cognitive science.]]></description>
      <pubDate>Wed, 27 Apr 2022 06:07:46 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2022-7492</link>
      <guid>https://doi.org/10.46298/jnsao-2022-7492</guid>
      <author>Bauschke, Heinz H.</author>
      <author>DiBerardino, Peter A. V.</author>
      <dc:creator>Bauschke, Heinz H.</dc:creator>
      <dc:creator>DiBerardino, Peter A. V.</dc:creator>
      <content:encoded><![CDATA[We compute the minimal angle spread with respect to the uniform distribution in the probability simplex. The resulting optimization problem is analytically solved. The formula provided shows that the minimal angle spread approaches zero as the dimension tends to infinity. We also discuss an application in cognitive science.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>From resolvents to generalized equations and quasi-variational inequalities: existence and differentiability</title>
      <description><![CDATA[We consider a generalized equation governed by a strongly monotone and Lipschitz single-valued mapping and a maximally monotone set-valued mapping in a Hilbert space. We are interested in the sensitivity of solutions w.r.t. perturbations of both mappings. We demonstrate that the directional differentiability of the solution map can be verified by using the directional differentiability of the single-valued operator and of the resolvent of the set-valued mapping. The result is applied to quasi-generalized equations in which we have an additional dependence of the solution within the set-valued part of the equation.]]></description>
      <pubDate>Mon, 10 Jan 2022 09:45:13 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2022-8537</link>
      <guid>https://doi.org/10.46298/jnsao-2022-8537</guid>
      <author>Wachsmuth, Gerd</author>
      <dc:creator>Wachsmuth, Gerd</dc:creator>
      <content:encoded><![CDATA[We consider a generalized equation governed by a strongly monotone and Lipschitz single-valued mapping and a maximally monotone set-valued mapping in a Hilbert space. We are interested in the sensitivity of solutions w.r.t. perturbations of both mappings. We demonstrate that the directional differentiability of the solution map can be verified by using the directional differentiability of the single-valued operator and of the resolvent of the set-valued mapping. The result is applied to quasi-generalized equations in which we have an additional dependence of the solution within the set-valued part of the equation.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Optimal Control of Plasticity with Inertia</title>
      <description><![CDATA[The paper is concerned with an optimal control problem governed by the equations of elastoplasticity with linear kinematic hardening and the inertia term at small strain. The objective is to optimize the displacement field and plastic strain by controlling volume forces. The idea given in [10] is used to transform the state equation into an evolution variational inequality (EVI) involving a certain maximal monotone operator. Results from [27] are then used to analyze the EVI. A regularization is obtained via the Yosida approximation of the maximal monotone operator; this approximation is smoothed further to derive optimality conditions for the smoothed optimal control problem.]]></description>
      <pubDate>Mon, 01 Nov 2021 09:17:49 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2021-7156</link>
      <guid>https://doi.org/10.46298/jnsao-2021-7156</guid>
      <author>Walther, Stephan</author>
      <dc:creator>Walther, Stephan</dc:creator>
      <content:encoded><![CDATA[The paper is concerned with an optimal control problem governed by the equations of elastoplasticity with linear kinematic hardening and the inertia term at small strain. The objective is to optimize the displacement field and plastic strain by controlling volume forces. The idea given in [10] is used to transform the state equation into an evolution variational inequality (EVI) involving a certain maximal monotone operator. Results from [27] are then used to analyze the EVI. A regularization is obtained via the Yosida approximation of the maximal monotone operator; this approximation is smoothed further to derive optimality conditions for the smoothed optimal control problem.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A new elementary proof for M-stationarity under MPCC-GCQ for mathematical programs with complementarity constraints</title>
      <description><![CDATA[It is known in the literature that local minimizers of mathematical programs with complementarity constraints (MPCCs) are so-called M-stationary points, if a weak MPCC-tailored Guignard constraint qualification (called MPCC-GCQ) holds. In this paper we present a new elementary proof for this result. Our proof is significantly simpler than existing proofs and does not rely on deeper technical theory such as calculus rules for limiting normal cones. A crucial ingredient is a proof of a (to the best of our knowledge previously open) conjecture, which was formulated in a Diploma thesis by Schinabeck.]]></description>
      <pubDate>Fri, 22 Oct 2021 09:09:48 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2021-6903</link>
      <guid>https://doi.org/10.46298/jnsao-2021-6903</guid>
      <author>Harder, Felix</author>
      <dc:creator>Harder, Felix</dc:creator>
      <content:encoded><![CDATA[It is known in the literature that local minimizers of mathematical programs with complementarity constraints (MPCCs) are so-called M-stationary points, if a weak MPCC-tailored Guignard constraint qualification (called MPCC-GCQ) holds. In this paper we present a new elementary proof for this result. Our proof is significantly simpler than existing proofs and does not rely on deeper technical theory such as calculus rules for limiting normal cones. A crucial ingredient is a proof of a (to the best of our knowledge previously open) conjecture, which was formulated in a Diploma thesis by Schinabeck.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Inexact and Stochastic Generalized Conditional Gradient with Augmented Lagrangian and Proximal Step</title>
      <description><![CDATA[In this paper we propose and analyze inexact and stochastic versions of the CGALP algorithm developed in [25], which we denote ICGALP, that allow for errors in the computation of several important quantities. In particular, this allows one to compute some gradients, proximal terms, and/or linear minimization oracles in an inexact fashion that facilitates the practical application of the algorithm to computationally intensive settings, e.g., in high (or possibly infinite) dimensional Hilbert spaces commonly found in machine learning problems. The algorithm is able to solve composite minimization problems involving the sum of three convex proper lower-semicontinuous functions subject to an affine constraint of the form Ax = b for some bounded linear operator A. Only one of the functions in the objective is assumed to be differentiable; the other two are assumed to have an accessible proximal operator and a linear minimization oracle. As main results, we show convergence of the Lagrangian values (so-called convergence in the Bregman sense) and asymptotic feasibility of the affine constraint as well as strong convergence of the sequence of dual variables to a solution of the dual problem, in an almost sure sense. Almost sure convergence rates are given for the Lagrangian values and the feasibility gap for the ergodic primal variables. Rates in expectation are given for the Lagrangian values and the feasibility gap subsequentially in the pointwise sense. Numerical experiments verifying the predicted rates of convergence are shown as well.]]></description>
      <pubDate>Wed, 01 Sep 2021 15:19:28 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2021-6480</link>
      <guid>https://doi.org/10.46298/jnsao-2021-6480</guid>
      <author>Silveti-Falls, Antonio</author>
      <author>Molinari, Cesare</author>
      <author>Fadili, Jalal</author>
      <dc:creator>Silveti-Falls, Antonio</dc:creator>
      <dc:creator>Molinari, Cesare</dc:creator>
      <dc:creator>Fadili, Jalal</dc:creator>
      <content:encoded><![CDATA[In this paper we propose and analyze inexact and stochastic versions of the CGALP algorithm developed in [25], which we denote ICGALP, that allow for errors in the computation of several important quantities. In particular, this allows one to compute some gradients, proximal terms, and/or linear minimization oracles in an inexact fashion that facilitates the practical application of the algorithm to computationally intensive settings, e.g., in high (or possibly infinite) dimensional Hilbert spaces commonly found in machine learning problems. The algorithm is able to solve composite minimization problems involving the sum of three convex proper lower-semicontinuous functions subject to an affine constraint of the form Ax = b for some bounded linear operator A. Only one of the functions in the objective is assumed to be differentiable; the other two are assumed to have an accessible proximal operator and a linear minimization oracle. As main results, we show convergence of the Lagrangian values (so-called convergence in the Bregman sense) and asymptotic feasibility of the affine constraint as well as strong convergence of the sequence of dual variables to a solution of the dual problem, in an almost sure sense. Almost sure convergence rates are given for the Lagrangian values and the feasibility gap for the ergodic primal variables. Rates in expectation are given for the Lagrangian values and the feasibility gap subsequentially in the pointwise sense. Numerical experiments verifying the predicted rates of convergence are shown as well.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On implicit variables in optimization theory</title>
      <description><![CDATA[Implicit variables of a mathematical program are variables which do not need to be optimized but are used to model feasibility conditions. They frequently appear in several different problem classes of optimization theory comprising bilevel programming, evaluated multiobjective optimization, or nonlinear optimization problems with slack variables. In order to deal with implicit variables, they are often interpreted as explicit ones. Here, we first point out that this is a light-headed approach which induces artificial locally optimal solutions. Afterwards, we derive various Mordukhovich-stationarity-type necessary optimality conditions which correspond to treating the implicit variables as explicit ones on the one hand, or using them only implicitly to model the constraints on the other. A detailed comparison of the obtained stationarity conditions as well as the associated underlying constraint qualifications will be provided. Overall, we proceed in a fairly general setting relying on modern tools of variational analysis. Finally, we apply our findings to different well-known problem classes of mathematical optimization in order to visualize the obtained theory.]]></description>
      <pubDate>Fri, 06 Aug 2021 06:13:44 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2021-7215</link>
      <guid>https://doi.org/10.46298/jnsao-2021-7215</guid>
      <author>Benko, Matúš</author>
      <author>Mehlitz, Patrick</author>
      <dc:creator>Benko, Matúš</dc:creator>
      <dc:creator>Mehlitz, Patrick</dc:creator>
      <content:encoded><![CDATA[Implicit variables of a mathematical program are variables which do not need to be optimized but are used to model feasibility conditions. They frequently appear in several different problem classes of optimization theory comprising bilevel programming, evaluated multiobjective optimization, or nonlinear optimization problems with slack variables. In order to deal with implicit variables, they are often interpreted as explicit ones. Here, we first point out that this is a light-headed approach which induces artificial locally optimal solutions. Afterwards, we derive various Mordukhovich-stationarity-type necessary optimality conditions which correspond to treating the implicit variables as explicit ones on the one hand, or using them only implicitly to model the constraints on the other. A detailed comparison of the obtained stationarity conditions as well as the associated underlying constraint qualifications will be provided. Overall, we proceed in a fairly general setting relying on modern tools of variational analysis. Finally, we apply our findings to different well-known problem classes of mathematical optimization in order to visualize the obtained theory.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On inner calmness*, generalized calculus, and derivatives of the normal cone mapping</title>
      <description><![CDATA[In this paper, we study continuity and Lipschitzian properties of set-valued mappings, focusing on inner-type conditions. We introduce new notions of inner calmness* and, its relaxation, fuzzy inner calmness*. We show that polyhedral maps enjoy inner calmness* and examine (fuzzy) inner calmness* of a multiplier mapping associated with constraint systems in depth. Then we utilize these notions to develop some new rules of generalized differential calculus, mainly for the primal objects (e.g. tangent cones). In particular, we propose an exact chain rule for graphical derivatives. We apply these results to compute the derivatives of the normal cone mapping, essential e.g. for sensitivity analysis of variational inequalities.]]></description>
      <pubDate>Sat, 26 Jun 2021 07:42:59 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2021-5881</link>
      <guid>https://doi.org/10.46298/jnsao-2021-5881</guid>
      <author>Benko, Matúš</author>
      <dc:creator>Benko, Matúš</dc:creator>
      <content:encoded><![CDATA[In this paper, we study continuity and Lipschitzian properties of set-valued mappings, focusing on inner-type conditions. We introduce new notions of inner calmness* and, its relaxation, fuzzy inner calmness*. We show that polyhedral maps enjoy inner calmness* and examine (fuzzy) inner calmness* of a multiplier mapping associated with constraint systems in depth. Then we utilize these notions to develop some new rules of generalized differential calculus, mainly for the primal objects (e.g. tangent cones). In particular, we propose an exact chain rule for graphical derivatives. We apply these results to compute the derivatives of the normal cone mapping, essential e.g. for sensitivity analysis of variational inequalities.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Uniform Regularity of Set-Valued Mappings and Stability of Implicit Multifunctions</title>
      <description><![CDATA[We propose a unifying general (i.e. not assuming the mapping to have any particular structure) view on the theory of regularity and clarify the relationships between the existing primal and dual quantitative sufficient and necessary conditions, including their hierarchy. We expose the typical sequence of regularity assertions, often hidden in the proofs, and the roles of the assumptions involved in the assertions, in particular, on the underlying space: general metric, normed, Banach or Asplund. As a consequence, we formulate primal and dual conditions for the stability properties of solution mappings to inclusions.]]></description>
      <pubDate>Tue, 22 Jun 2021 07:41:39 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2021-6599</link>
      <guid>https://doi.org/10.46298/jnsao-2021-6599</guid>
      <author>Cuong, Nguyen Duy</author>
      <author>Kruger, Alexander Y.</author>
      <dc:creator>Cuong, Nguyen Duy</dc:creator>
      <dc:creator>Kruger, Alexander Y.</dc:creator>
      <content:encoded><![CDATA[We propose a unifying general (i.e. not assuming the mapping to have any particular structure) view on the theory of regularity and clarify the relationships between the existing primal and dual quantitative sufficient and necessary conditions, including their hierarchy. We expose the typical sequence of regularity assertions, often hidden in the proofs, and the roles of the assumptions involved in the assertions, in particular, on the underlying space: general metric, normed, Banach or Asplund. As a consequence, we formulate primal and dual conditions for the stability properties of solution mappings to inclusions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Relations between Abs-Normal NLPs and MPCCs. Part 2: Weak Constraint Qualifications</title>
      <description><![CDATA[This work continues an ongoing effort to compare non-smooth optimization problems in abs-normal form to Mathematical Programs with Complementarity Constraints (MPCCs). We study general Nonlinear Programs with equality and inequality constraints in abs-normal form, so-called Abs-Normal NLPs, and their relation to equivalent MPCC reformulations. We introduce the concepts of Abadie's and Guignard's kink qualification and prove relations to MPCC-ACQ and MPCC-GCQ for the counterpart MPCC formulations. Due to non-uniqueness of a specific slack reformulation suggested in [10], the relations are non-trivial. It turns out that constraint qualifications of Abadie type are preserved. We also prove the weaker result that equivalence of Guignard's (and Abadie's) constraint qualifications for all branch problems holds, while the question of GCQ preservation remains open. Finally, we introduce M-stationarity and B-stationarity concepts for abs-normal NLPs and prove first order optimality conditions corresponding to MPCC counterpart formulations.]]></description>
      <pubDate>Thu, 18 Feb 2021 09:18:48 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2021-6673</link>
      <guid>https://doi.org/10.46298/jnsao-2021-6673</guid>
      <author>Hegerhorst-Schultchen, Lisa C.</author>
      <author>Kirches, Christian</author>
      <author>Steinbach, Marc C.</author>
      <dc:creator>Hegerhorst-Schultchen, Lisa C.</dc:creator>
      <dc:creator>Kirches, Christian</dc:creator>
      <dc:creator>Steinbach, Marc C.</dc:creator>
      <content:encoded><![CDATA[This work continues an ongoing effort to compare non-smooth optimization problems in abs-normal form to Mathematical Programs with Complementarity Constraints (MPCCs). We study general Nonlinear Programs with equality and inequality constraints in abs-normal form, so-called Abs-Normal NLPs, and their relation to equivalent MPCC reformulations. We introduce the concepts of Abadie's and Guignard's kink qualification and prove relations to MPCC-ACQ and MPCC-GCQ for the counterpart MPCC formulations. Due to non-uniqueness of a specific slack reformulation suggested in [10], the relations are non-trivial. It turns out that constraint qualifications of Abadie type are preserved. We also prove the weaker result that equivalence of Guignard's (and Abadie's) constraint qualifications for all branch problems holds, while the question of GCQ preservation remains open. Finally, we introduce M-stationarity and B-stationarity concepts for abs-normal NLPs and prove first order optimality conditions corresponding to MPCC counterpart formulations.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Relations between Abs-Normal NLPs and MPCCs. Part 1: Strong Constraint Qualifications</title>
      <description><![CDATA[This work is part of an ongoing effort of comparing non-smooth optimization problems in abs-normal form to MPCCs. We study the general abs-normal NLP with equality and inequality constraints in relation to an equivalent MPCC reformulation. We show that kink qualifications and MPCC constraint qualifications of linear independence type and Mangasarian-Fromovitz type are equivalent. Then we consider strong stationarity concepts with first and second order optimality conditions, which again turn out to be equivalent for the two problem classes. Throughout we also consider specific slack reformulations suggested in [9], which preserve constraint qualifications of linear independence type but not of Mangasarian-Fromovitz type.]]></description>
      <pubDate>Thu, 18 Feb 2021 09:17:57 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2021-6672</link>
      <guid>https://doi.org/10.46298/jnsao-2021-6672</guid>
      <author>Hegerhorst-Schultchen, Lisa C.</author>
      <author>Kirches, Christian</author>
      <author>Steinbach, Marc C.</author>
      <dc:creator>Hegerhorst-Schultchen, Lisa C.</dc:creator>
      <dc:creator>Kirches, Christian</dc:creator>
      <dc:creator>Steinbach, Marc C.</dc:creator>
      <content:encoded><![CDATA[This work is part of an ongoing effort of comparing non-smooth optimization problems in abs-normal form to MPCCs. We study the general abs-normal NLP with equality and inequality constraints in relation to an equivalent MPCC reformulation. We show that kink qualifications and MPCC constraint qualifications of linear independence type and Mangasarian-Fromovitz type are equivalent. Then we consider strong stationarity concepts with first and second order optimality conditions, which again turn out to be equivalent for the two problem classes. Throughout we also consider specific slack reformulations suggested in [9], which preserve constraint qualifications of linear independence type but not of Mangasarian-Fromovitz type.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>First-order differentiability properties of a class of equality constrained optimal value functions with applications</title>
      <description><![CDATA[In this paper we study the right differentiability of a parametric infimum function over a parametric set defined by equality constraints. We present a new theorem with sufficient conditions for the right differentiability with respect to the parameter. Target applications are nonconvex objective functions with equality constraints arising in optimal control and shape optimisation. The theorem makes use of the averaged adjoint approach in conjunction with the variational approach of Kunisch, Ito and Peichl. We provide two examples of our abstract result: (a) a shape optimisation problem involving a semilinear partial differential equation which exhibits infinitely many solutions, (b) a finite dimensional quadratic function subject to a nonlinear equation.]]></description>
      <pubDate>Thu, 17 Dec 2020 12:19:35 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2020-6034</link>
      <guid>https://doi.org/10.46298/jnsao-2020-6034</guid>
      <author>Sturm, Kevin</author>
      <dc:creator>Sturm, Kevin</dc:creator>
      <content:encoded><![CDATA[In this paper we study the right differentiability of a parametric infimum function over a parametric set defined by equality constraints. We present a new theorem with sufficient conditions for the right differentiability with respect to the parameter. Target applications are nonconvex objective functions with equality constraints arising in optimal control and shape optimisation. The theorem makes use of the averaged adjoint approach in conjunction with the variational approach of Kunisch, Ito and Peichl. We provide two examples of our abstract result: (a) a shape optimisation problem involving a semilinear partial differential equation which exhibits infinitely many solutions, (b) a finite dimensional quadratic function subject to a nonlinear equation.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Asymptotic stationarity and regularity for nonsmooth optimization problems</title>
      <description><![CDATA[Based on the tools of limiting variational analysis, we derive a sequential necessary optimality condition for nonsmooth mathematical programs which holds without any additional assumptions. In order to ensure that stationary points in this new sense are already Mordukhovich-stationary, the presence of a constraint qualification which we call AM-regularity is necessary. We investigate the relationship between AM-regularity and other constraint qualifications from nonsmooth optimization like metric (sub-)regularity of the underlying feasibility mapping. Our findings are applied to optimization problems with geometric and, particularly, disjunctive constraints. This way, it is shown that AM-regularity recovers recently introduced cone-continuity-type constraint qualifications, sometimes referred to as AKKT-regularity, from standard nonlinear and complementarity-constrained optimization. Finally, we discuss some consequences of AM-regularity for the limiting variational calculus.]]></description>
      <pubDate>Tue, 15 Dec 2020 09:04:02 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2020-6575</link>
      <guid>https://doi.org/10.46298/jnsao-2020-6575</guid>
      <author>Mehlitz, Patrick</author>
      <dc:creator>Mehlitz, Patrick</dc:creator>
      <content:encoded><![CDATA[Based on the tools of limiting variational analysis, we derive a sequential necessary optimality condition for nonsmooth mathematical programs which holds without any additional assumptions. In order to ensure that stationary points in this new sense are already Mordukhovich-stationary, the presence of a constraint qualification which we call AM-regularity is necessary. We investigate the relationship between AM-regularity and other constraint qualifications from nonsmooth optimization like metric (sub-)regularity of the underlying feasibility mapping. Our findings are applied to optimization problems with geometric and, particularly, disjunctive constraints. This way, it is shown that AM-regularity recovers recently introduced cone-continuity-type constraint qualifications, sometimes referred to as AKKT-regularity, from standard nonlinear and complementarity-constrained optimization. Finally, we discuss some consequences of AM-regularity for the limiting variational calculus.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Constructing a subgradient from directional derivatives for functions of two variables</title>
      <description><![CDATA[For any scalar-valued bivariate function that is locally Lipschitz continuous and directionally differentiable, it is shown that a subgradient may always be constructed from the function's directional derivatives in the four compass directions, arranged in a so-called "compass difference". When the original function is nonconvex, the obtained subgradient is an element of Clarke's generalized gradient, but the result appears to be novel even for convex functions. The function is not required to be represented in any particular form, and no further assumptions are required, though the result is strengthened when the function is additionally L-smooth in the sense of Nesterov. For certain optimal-value functions and certain parametric solutions of differential equation systems, these new results appear to provide the only known way to compute a subgradient. These results also imply that centered finite differences will converge to a subgradient for bivariate nonsmooth functions. As a dual result, we find that any compact convex set in two dimensions contains the midpoint of its interval hull. Examples are included for illustration, and it is demonstrated that these results do not extend directly to functions of more than two variables or sets in higher dimensions.]]></description>
      <pubDate>Fri, 12 Jun 2020 08:10:09 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2020-6061</link>
      <guid>https://doi.org/10.46298/jnsao-2020-6061</guid>
      <author>Khan, Kamil A.</author>
      <author>Yuan, Yingwei</author>
      <dc:creator>Khan, Kamil A.</dc:creator>
      <dc:creator>Yuan, Yingwei</dc:creator>
      <content:encoded><![CDATA[For any scalar-valued bivariate function that is locally Lipschitz continuous and directionally differentiable, it is shown that a subgradient may always be constructed from the function's directional derivatives in the four compass directions, arranged in a so-called "compass difference". When the original function is nonconvex, the obtained subgradient is an element of Clarke's generalized gradient, but the result appears to be novel even for convex functions. The function is not required to be represented in any particular form, and no further assumptions are required, though the result is strengthened when the function is additionally L-smooth in the sense of Nesterov. For certain optimal-value functions and certain parametric solutions of differential equation systems, these new results appear to provide the only known way to compute a subgradient. These results also imply that centered finite differences will converge to a subgradient for bivariate nonsmooth functions. As a dual result, we find that any compact convex set in two dimensions contains the midpoint of its interval hull. Examples are included for illustration, and it is demonstrated that these results do not extend directly to functions of more than two variables or sets in higher dimensions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Optimal control of an abstract evolution variational inequality with application to homogenized plasticity</title>
      <description><![CDATA[The paper is concerned with an optimal control problem governed by a state equation in form of a generalized abstract operator differential equation involving a maximal monotone operator. The state equation is uniquely solvable, but the associated solution operator is in general not Gâteaux-differentiable. In order to derive optimality conditions, we therefore regularize the state equation and its solution operator, respectively, by means of a (smoothed) Yosida approximation. We show convergence of global minimizers for the regularization parameter tending to zero and derive necessary and sufficient optimality conditions for the regularized problems. The paper ends with an application of the abstract theory to optimal control of homogenized quasi-static elastoplasticity.]]></description>
      <pubDate>Tue, 12 May 2020 21:02:25 +0000</pubDate>
      <link>https://doi.org/10.46298/jnsao-2020-5800</link>
      <guid>https://doi.org/10.46298/jnsao-2020-5800</guid>
      <author>Meinlschmidt, Hannes</author>
      <author>Meyer, Christian</author>
      <author>Walther, Stephan</author>
      <dc:creator>Meinlschmidt, Hannes</dc:creator>
      <dc:creator>Meyer, Christian</dc:creator>
      <dc:creator>Walther, Stephan</dc:creator>
      <content:encoded><![CDATA[The paper is concerned with an optimal control problem governed by a state equation in form of a generalized abstract operator differential equation involving a maximal monotone operator. The state equation is uniquely solvable, but the associated solution operator is in general not Gâteaux-differentiable. In order to derive optimality conditions, we therefore regularize the state equation and its solution operator, respectively, by means of a (smoothed) Yosida approximation. We show convergence of global minimizers for the regularization parameter tending to zero and derive necessary and sufficient optimality conditions for the regularized problems. The paper ends with an application of the abstract theory to optimal control of homogenized quasi-static elastoplasticity.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
  </channel>
</rss>
