publications
2025
- Nonlinearly preconditioned gradient flows
  Konstantinos Oikonomidis, Alexander Bodard, Jan Quan, and Panagiotis Patrinos
  arXiv preprint arXiv:2511.20370, 2025
@article{oikonomidis2025nonlinearly,
  title = {Nonlinearly preconditioned gradient flows},
  author = {Oikonomidis, Konstantinos and Bodard, Alexander and Quan, Jan and Patrinos, Panagiotis},
  journal = {arXiv preprint arXiv:2511.20370},
  year = {2025},
}

- Scaled relative graphs for pairs of operators beyond classical monotonicity
  Jan Quan, Alexander Bodard, Konstantinos Oikonomidis, and Panagiotis Patrinos
  arXiv preprint arXiv:2511.20209, 2025
@article{quan2025scaled,
  title = {Scaled relative graphs for pairs of operators beyond classical monotonicity},
  author = {Quan, Jan and Bodard, Alexander and Oikonomidis, Konstantinos and Patrinos, Panagiotis},
  journal = {arXiv preprint arXiv:2511.20209},
  year = {2025},
}

- The inexact power augmented Lagrangian method for constrained nonconvex optimization
  Alexander Bodard, Konstantinos Oikonomidis, Emanuel Laude, and Panagiotis Patrinos
  Transactions on Machine Learning Research, Nov 2025
This work introduces an unconventional inexact augmented Lagrangian method, where the augmenting term is a Euclidean norm raised to a power between one and two. The proposed algorithm applies to a broad class of constrained nonconvex minimization problems that involve nonlinear equality constraints over a convex set, under a mild regularity condition. First, we conduct a full complexity analysis of the method, leveraging an accelerated first-order algorithm for solving the Hölder-smooth subproblems. Next, we present an inexact proximal point method to tackle these subproblems, demonstrating that it achieves an improved convergence rate. Notably, this rate reduces to the best-known convergence rate for first-order methods when the augmenting term is a squared Euclidean norm. Our worst-case complexity results further show that using lower powers for the augmenting term leads to faster constraint satisfaction, albeit with a slower decrease in the dual residual. Numerical experiments support our theoretical findings, illustrating that this trade-off between constraint satisfaction and cost minimization is advantageous for certain practical problems.
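As a rough illustration of the scheme described above, the sketch below runs a power augmented Lagrangian iteration on a hypothetical toy problem. The problem data, the penalty parameters, and the plain gradient-descent inner solver are illustrative assumptions, not the paper's algorithm or its accelerated subproblem solvers.

```python
import numpy as np

# Toy instance (illustrative only): minimize f(x) = 0.5*(x0 + x1)^2
# subject to c(x) = x0 + x1 - 1 = 0.
grad_f = lambda x: np.array([x[0] + x[1], x[0] + x[1]])
c = lambda x: np.array([x[0] + x[1] - 1.0])
J = np.array([[1.0, 1.0]])  # Jacobian of c (constant here)

def power_alm(x, rho=1.5, beta=10.0, outer=15, inner=4000, step=1e-3):
    """Power ALM sketch: the augmenting term is beta * ||c(x)||^rho, rho in (1, 2]."""
    y = np.zeros(1)  # dual estimate
    for _ in range(outer):
        for _ in range(inner):  # inexact inner solve by plain gradient descent
            r = c(x)
            nr = np.linalg.norm(r)
            g_pen = beta * rho * nr ** (rho - 1) * (r / nr) if nr > 0 else 0.0 * r
            x = x - step * (grad_f(x) + J.T @ (y + g_pen))
        r = c(x)
        nr = np.linalg.norm(r)
        if nr > 0:  # dual step mirrors the gradient of the augmenting term
            y = y + beta * rho * nr ** (rho - 1) * (r / nr)
    return x, y

x, y = power_alm(np.array([2.0, -0.5]))
```

With rho = 2 the dual step reduces to the classical ALM multiplier update y + beta * c(x); lower powers drive the constraint residual down more aggressively per outer iteration, matching the trade-off discussed in the abstract.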
@article{bodard2025inexact,
  title = {The inexact power augmented Lagrangian method for constrained nonconvex optimization},
  author = {Bodard, Alexander and Oikonomidis, Konstantinos and Laude, Emanuel and Patrinos, Panagiotis},
  journal = {Transactions on Machine Learning Research},
  issn = {2835-8856},
  month = nov,
  year = {2025},
  url = {https://openreview.net/forum?id=63ANb4r7EM},
  language = {en},
}

- Escaping saddle points without Lipschitz smoothness: the power of nonlinear preconditioning
  Alexander Bodard and Panagiotis Patrinos
  Sep 2025. Accepted for NeurIPS 2025, spotlight
We study generalized smoothness in nonconvex optimization, focusing on (L0, L1)-smoothness and anisotropic smoothness. The former was empirically derived from practical neural network training examples, while the latter arises naturally in the analysis of nonlinearly preconditioned gradient methods. We introduce a new sufficient condition that encompasses both notions, reveals their close connection, and holds in key applications such as phase retrieval and matrix factorization. Leveraging tools from dynamical systems theory, we then show that nonlinear preconditioning – including gradient clipping – preserves the saddle point avoidance property of classical gradient descent. Crucially, the assumptions required for this analysis are actually satisfied in these applications, unlike in classical results that rely on restrictive Lipschitz smoothness conditions. We further analyze a perturbed variant that efficiently attains second-order stationarity with only logarithmic dependence on dimension, matching similar guarantees of classical gradient methods.
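A minimal sketch of the gradient-clipping preconditioner mentioned above, applied to a hypothetical objective whose gradient grows faster than any global Lipschitz bound allows; the function, step size, and clipping threshold are illustrative choices.

```python
import numpy as np

def clipped_gd(grad, x, step=0.1, clip=1.0, iters=2000):
    """Gradient descent with norm clipping: rescale g when ||g|| exceeds `clip`."""
    for _ in range(iters):
        g = grad(x)
        ng = np.linalg.norm(g)
        if ng > clip:
            g = g * (clip / ng)  # preserve direction, bound the step length
        x = x - step * g
    return x

# f(x) = x^4 is not globally Lipschitz-smooth; clipping keeps the iterates stable
# even from a far-out initialization where plain gradient descent would blow up.
x = clipped_gd(lambda x: 4.0 * x ** 3, np.array([10.0]))
```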
@misc{bodard_escaping_2025,
  title = {Escaping saddle points without {Lipschitz} smoothness: the power of nonlinear preconditioning},
  shorttitle = {Escaping saddle points without {Lipschitz} smoothness},
  url = {http://arxiv.org/abs/2509.15817},
  doi = {10.48550/arXiv.2509.15817},
  urldate = {2025-09-24},
  publisher = {arXiv},
  author = {Bodard, Alexander and Patrinos, Panagiotis},
  month = sep,
  year = {2025},
  note = {Accepted for NeurIPS 2025, spotlight},
  keywords = {Mathematics - Optimization and Control},
}

- Global convergence analysis of the power proximal point and augmented Lagrangian method
  Konstantinos A. Oikonomidis, Alexander Bodard, Emanuel Laude, and Panagiotis Patrinos
  Computational Optimization and Applications, Sep 2025
In this paper we study an unconventional inexact Augmented Lagrangian Method (ALM) for convex optimization problems, as first proposed by Bertsekas, wherein the penalty term is a potentially non-Euclidean norm raised to a power between one and two. We analyze the algorithm through the lens of a nonlinear Proximal Point Method (PPM), as originally introduced by Luque, applied to the dual problem. While Luque analyzes the order of local convergence of the scheme with Euclidean norms, our focus is on the non-Euclidean case, which prevents us from using standard tools for the analysis, such as the nonexpansiveness of the proximal mapping. To allow for errors in the primal update, we derive two implementable stopping criteria under which we analyze both the global and the local convergence rates of the algorithm. More specifically, we show that the method enjoys a fast sublinear global rate in general and a local superlinear rate under suitable growth assumptions. We also highlight that the power ALM can be interpreted as classical ALM with an implicitly defined penalty-parameter schedule, reducing its parameter dependence. Our experiments on a number of relevant problems suggest that for certain powers the method performs similarly to a classical ALM with fine-tuned adaptive penalty rule, despite involving fewer parameters.
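In one dimension, the nonlinear PPM iteration underlying this analysis can be sketched directly. The test function, the power, and the bisection-based subproblem solver below are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

def power_prox_step(df, x, rho=1.5, beta=1.0, lo=-10.0, hi=10.0, tol=1e-12):
    """One step of a 1D power PPM: x+ = argmin_z f(z) + (beta/rho)*|z - x|^rho.
    For convex f the subproblem's derivative is monotone, so bisection applies."""
    def phi(z):  # derivative of the subproblem objective
        d = z - x
        return df(z) + beta * abs(d) ** (rho - 1) * np.sign(d)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# f(z) = |z| (df = sign): each step moves roughly a unit distance toward the
# minimizer 0, so a handful of iterations reaches it from x = 4.
x = 4.0
for _ in range(6):
    x = power_prox_step(np.sign, x)
```

With rho = 2 the step reduces to the classical (Euclidean) proximal point update; for rho < 2 the penalty's derivative vanishes more slowly near the current point, which is the non-Euclidean behavior the paper analyzes.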
@article{oikonomidis_global_2025,
  title = {Global convergence analysis of the power proximal point and augmented {Lagrangian} method},
  issn = {1573-2894},
  url = {https://doi.org/10.1007/s10589-025-00730-8},
  doi = {10.1007/s10589-025-00730-8},
  language = {en},
  urldate = {2025-09-24},
  journal = {Computational Optimization and Applications},
  author = {Oikonomidis, Konstantinos A. and Bodard, Alexander and Laude, Emanuel and Patrinos, Panagiotis},
  month = sep,
  year = {2025},
  keywords = {Augmented Lagrangian, Proximal point, Duality},
}

- Second-order methods for provably escaping strict saddle points in composite nonconvex and nonsmooth optimization
  Alexander Bodard, Masoud Ahookhosh, and Panagiotis Patrinos
  arXiv:2506.22332 [math], Jun 2025
This study introduces two second-order methods designed to provably avoid saddle points in composite nonconvex optimization problems: (i) a nonsmooth trust-region method and (ii) a curvilinear linesearch method. These developments are grounded in the forward-backward envelope (FBE), for which we analyze the local second-order differentiability around critical points and establish a novel equivalence between its second-order stationary points and those of the original objective. We show that the proposed algorithms converge to second-order stationary points of the FBE under a mild local smoothness condition on the proximal mapping of the nonsmooth term. Notably, for C^2-partly smooth functions, this condition holds under a standard strict complementarity assumption. To the best of our knowledge, these are the first second-order algorithms that provably escape nonsmooth strict saddle points of composite nonconvex optimization, regardless of the initialization. Our preliminary numerical experiments show promising performance of the developed methods, validating our theoretical foundations.
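A small sketch of evaluating the forward-backward envelope that this analysis is built on, specialized to the nonsmooth term g = ||.||_1, whose proximal mapping is soft-thresholding. The smooth term and test point are illustrative.

```python
import numpy as np

def soft(z, t):
    """Proximal mapping of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fbe(f, grad_f, x, gamma=0.1):
    """Forward-backward envelope of f + ||.||_1 at x:
    phi_gamma(x) = f(x) + <grad f(x), z - x> + ||z||_1 + ||z - x||^2 / (2*gamma),
    where z = prox_{gamma*||.||_1}(x - gamma*grad f(x)) is the FB step from x."""
    g = grad_f(x)
    z = soft(x - gamma * g, gamma)
    return f(x) + g @ (z - x) + np.abs(z).sum() + np.sum((z - x) ** 2) / (2 * gamma)

# Illustrative smooth term f(x) = 0.5*||x||^2; the FBE lower-bounds f + g at x,
# since z = x is feasible in the minimization defining the envelope.
f = lambda x: 0.5 * np.sum(x ** 2)
x = np.array([3.0, -2.0, 0.1])
phi = fbe(f, lambda x: x, x)
```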
@misc{bodard_second-order_2025,
  title = {Second-order methods for provably escaping strict saddle points in composite nonconvex and nonsmooth optimization},
  url = {http://arxiv.org/abs/2506.22332},
  doi = {10.48550/arXiv.2506.22332},
  urldate = {2025-07-08},
  publisher = {arXiv},
  author = {Bodard, Alexander and Ahookhosh, Masoud and Patrinos, Panagiotis},
  month = jun,
  year = {2025},
  note = {arXiv:2506.22332 [math]},
  keywords = {Mathematics - Optimization and Control},
}

- A localized consensus-based sampling algorithm
  Arne Bouillon, Alexander Bodard, Panagiotis Patrinos, Dirk Nuyens, and Giovanni Samaey
  arXiv:2505.24861 [math], May 2025
We develop a novel interacting-particle method for sampling from non-Gaussian distributions. As a first step, we propose a new way to derive the consensus-based sampling (CBS) algorithm, starting from ensemble-preconditioned Langevin diffusions. We approximate the target potential by its Moreau envelope, such that the gradient in the Langevin equation can be replaced by a proximal operator. We then approximate the proximal operator by a weighted mean, and finally assume that the initial and target distributions are Gaussian, resulting in the CBS dynamics. If we keep only those approximations that can be justified in the non-Gaussian setting, the result is a new interacting-particle method for sampling, which we call localized consensus-based sampling. We prove that our algorithm is affine-invariant and exact for Gaussian distributions in the mean-field setting. Numerical tests illustrate that localized CBS compares favorably to alternative methods in terms of affine-invariance and performance on non-Gaussian distributions.
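The prox-to-weighted-mean approximation at the heart of this derivation can be sketched as follows. The Gaussian target, the sharpening parameter `beta_sharp`, and all constants are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_prox(V, x, ensemble, lam=0.5, beta_sharp=20.0):
    """Replace prox_{lam*V}(x) = argmin_z V(z) + ||z - x||^2/(2*lam) by a
    weighted ensemble mean; a large beta_sharp concentrates the weights near
    the minimizer (a Laplace-type approximation)."""
    logw = np.array([-beta_sharp * (V(z) + np.sum((z - x) ** 2) / (2 * lam))
                     for z in ensemble])
    w = np.exp(logw - logw.max())  # stabilized softmax-style weights
    return (w[:, None] * ensemble).sum(axis=0) / w.sum()

# Standard Gaussian potential V(z) = 0.5*||z||^2: the exact prox is the
# shrinkage x / (1 + lam), which the weighted mean approximates up to
# finite-sample and sampling-prior error.
V = lambda z: 0.5 * np.sum(z ** 2)
ensemble = rng.normal(size=(4000, 1))
p = approx_prox(V, np.array([1.0]), ensemble)
```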
@misc{bouillon_localized_2025,
  title = {A localized consensus-based sampling algorithm},
  url = {http://arxiv.org/abs/2505.24861},
  doi = {10.48550/arXiv.2505.24861},
  language = {en},
  urldate = {2025-06-03},
  publisher = {arXiv},
  author = {Bouillon, Arne and Bodard, Alexander and Patrinos, Panagiotis and Nuyens, Dirk and Samaey, Giovanni},
  month = may,
  year = {2025},
  note = {arXiv:2505.24861 [math]},
  keywords = {Computer Science - Numerical Analysis, Mathematics - Numerical Analysis, Mathematics - Optimization and Control},
}
2024
- EM++: A parameter learning framework for stochastic switching systems
  Renzi Wang, Alexander Bodard, Mathijs Schuurmans, and Panagiotis Patrinos
  arXiv:2407.16359 [cs, eess, math], Jul 2024
This paper proposes a general switching dynamical system model, and a custom majorization-minimization-based algorithm EM++ for identifying its parameters. For certain families of distributions, such as Gaussian distributions, this algorithm reduces to the well-known expectation-maximization method. We prove global convergence of the algorithm under suitable assumptions, thus addressing an important open issue in the switching system identification literature. The effectiveness of both the proposed model and algorithm is validated through extensive numerical experiments.
@misc{wang_em_2024,
  title = {{EM}++: {A} parameter learning framework for stochastic switching systems},
  shorttitle = {{EM}++},
  url = {http://arxiv.org/abs/2407.16359},
  language = {en},
  urldate = {2024-08-23},
  publisher = {arXiv},
  author = {Wang, Renzi and Bodard, Alexander and Schuurmans, Mathijs and Patrinos, Panagiotis},
  month = jul,
  year = {2024},
  note = {arXiv:2407.16359 [cs, eess, math]},
  keywords = {Mathematics - Optimization and Control, Electrical Engineering and Systems Science - Systems and Control},
}
2023
- PANTR: A Proximal Algorithm With Trust-Region Updates for Nonconvex Constrained Optimization
  Alexander Bodard, Pieter Pas, and Panagiotis Patrinos
  IEEE Control Systems Letters, Jul 2023
This letter presents PANTR, an efficient solver for nonconvex constrained optimization problems, that is well-suited as an inner solver for an augmented Lagrangian method. The proposed scheme combines forward-backward iterations with solutions to trust-region subproblems: the former ensures global convergence, whereas the latter enables fast update directions. We discuss how the algorithm is able to exploit exact Hessian information of the smooth objective term through a linear Newton approximation, while benefiting from the structure of box constraints or ℓ₁-regularization. An open-source C++ implementation of PANTR is made available as part of the NLP solver library ALPAQA. Finally, the effectiveness of the proposed method is demonstrated in nonlinear model predictive control applications.
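For box constraints, the forward-backward iterations that give PANTR its global convergence reduce to projected gradient steps. The quadratic example below is an illustrative stand-in, not the solver itself.

```python
import numpy as np

def forward_backward_box(grad, x, lb, ub, step=0.1, iters=200):
    """Projected gradient: a forward (gradient) step followed by the backward
    (proximal) step, which for a box indicator is just clipping onto [lb, ub]."""
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lb, ub)
    return x

# minimize 0.5*||x - c||^2 over the box [0, 1]^2, with the unconstrained
# minimizer c outside the box; the solution is the projection of c onto the box.
c = np.array([2.0, -1.0])
x = forward_backward_box(lambda x: x - c, np.zeros(2), 0.0, 1.0)
```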
@article{bodard_pantr_2023,
  title = {{PANTR}: {A} {Proximal} {Algorithm} {With} {Trust}-{Region} {Updates} for {Nonconvex} {Constrained} {Optimization}},
  volume = {7},
  issn = {2475-1456},
  shorttitle = {{PANTR}},
  doi = {10.1109/LCSYS.2023.3286331},
  journal = {IEEE Control Systems Letters},
  author = {Bodard, Alexander and Pas, Pieter and Patrinos, Panagiotis},
  year = {2023},
  keywords = {Optimization, Convergence, Prediction algorithms, Predictive control, Approximation algorithms, Minimization, Optimization algorithms, Reliability, numerical algorithms, predictive control for nonlinear systems},
  pages = {2389--2394},
}

- The “Eagle” Approach To Train Electrical Engineers With Collaborative Problem-Solving Skills
  Fereshteh Poormohammadi, Merijn Van Deyck, Martijn Deckers, Abdul Saboor, Bowen Wang, and 11 more authors
  Arrow@TU Dublin, Jul 2023
Engineering education plays a critical role in addressing the ever-increasing environmental and societal challenges, and collaborative problem solving (CPS) is a vital skill for engineers to tackle such complex multidisciplinary challenges and develop high-quality solutions. The EAGLE project at KU Leuven exemplifies CPS implementation in electrical engineering education, providing students with real-world connections and deep learning opportunities to develop teamwork, problem-solving, and negotiation skills.
@article{poormohammadi_eagle_2023,
  title = {The “{Eagle}” {Approach} {To} {Train} {Electrical} {Engineers} {With} {Collaborative} {Problem}-{Solving} {Skills}},
  url = {https://arrow.tudublin.ie/sefi2023_prapap/143/},
  doi = {10.21427/3NC0-AB72},
  language = {en},
  urldate = {2024-03-03},
  journal = {Arrow@TU Dublin},
  author = {Poormohammadi, Fereshteh and Van Deyck, Merijn and Deckers, Martijn and Saboor, Abdul and Wang, Bowen and Mehrjouseresht, Pouya and Zhang, Zhenda and Symons, Arne and Pas, Pieter and Bodard, Alexander and Van Rooij, Hans and Verhelst, Marian and Bertrand, Alexander and Sabariego, Ruth and Patrinos, Panagiotis and Coppens, Peter},
  year = {2023},
}

- SPOCK: A proximal method for multistage risk-averse optimal control problems
  Alexander Bodard, Ruairi Moran, Mathijs Schuurmans, Panagiotis Patrinos, and Pantelis Sopasakis
  IFAC-PapersOnLine, Jan 2023
Risk-averse optimal control problems have gained a lot of attention in the last decade, mostly due to their attractive mathematical properties and practical importance. They can be seen as an interpolation between stochastic and robust optimal control approaches, allowing the designer to trade off performance for robustness and vice versa. Due to their stochastic nature, risk-averse problems are of a very large scale, involving millions of decision variables, which poses a challenge in terms of efficient computation. In this work, we propose a splitting for general risk-averse problems involving linear dynamical systems and show how to efficiently compute iterates on GPU-enabled hardware. Moreover, we propose Spock — a new algorithm that utilizes the proposed splitting and takes advantage of the SuperMann scheme combined with fast directions from Anderson’s acceleration method for enhanced convergence speed. We implement Spock in Julia as an open-source solver, which is amenable to warm-starting and massive parallelization.
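Anderson acceleration, one ingredient of the fast directions mentioned above, can be sketched in its memory-1 form; the fixed-point map below is an illustrative linear contraction, not the SPOCK operator.

```python
import numpy as np

def aa1(g, x, iters=20):
    """Anderson acceleration with memory 1 (AA(1)) for a fixed-point map g:
    combine the two most recent iterates with weights that minimize the
    extrapolated residual ||alpha*r_k + (1 - alpha)*r_{k-1}||."""
    x_prev, r_prev = x, g(x) - x
    x = g(x)  # one plain fixed-point step to start
    for _ in range(iters):
        r = g(x) - x
        denom = np.dot(r - r_prev, r - r_prev)
        if denom == 0:
            break  # residual stalled: we are at a fixed point
        alpha = np.dot(r_prev, r_prev - r) / denom
        x_next = alpha * g(x) + (1 - alpha) * g(x_prev)
        x_prev, r_prev, x = x, r, x_next
    return x

g = lambda x: 0.5 * x + 1.0  # contraction with fixed point x* = 2
x = aa1(g, np.array([10.0]))
```

On this affine map the secant-like extrapolation lands on the fixed point immediately, whereas plain iteration only halves the error per step.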
@article{bodard_spock_2023,
  series = {22nd {IFAC} {World} {Congress}},
  title = {{SPOCK}: {A} proximal method for multistage risk-averse optimal control problems},
  volume = {56},
  issn = {2405-8963},
  shorttitle = {{SPOCK}},
  url = {https://www.sciencedirect.com/science/article/pii/S2405896323014891},
  doi = {10.1016/j.ifacol.2023.10.1086},
  number = {2},
  urldate = {2024-09-21},
  journal = {IFAC-PapersOnLine},
  author = {Bodard, Alexander and Moran, Ruairi and Schuurmans, Mathijs and Patrinos, Panagiotis and Sopasakis, Pantelis},
  month = jan,
  year = {2023},
  keywords = {large-scale optimization problems, modeling for control optimization, risk-averse optimal control, Stochastic optimal control problems},
  pages = {1944--1951},
}