publications

publications policy — I do my best to maintain updated versions on arXiv, with typo corrections and clarifications (both are generally marked in bold and red for easy reference). Therefore, please favor the arXiv versions over the officially published ones.

thesis — my thesis (under the supervision of François Glineur and Julien Hendrickx) was awarded the ICTEAM thesis award for 2018 and the IBM-FNRS innovation award for 2018, and was a finalist for the AW Tucker prize for 2018. In addition, we received the 2017 best paper award from Optimization Letters for joint work with Etienne de Klerk and François Glineur (for this paper).

codes — see my GitHub profile for all my code. The current version of the Performance EStimation TOolbox (PESTO) is available from here (user manual, conference proceeding). Numerical worst-case analyses via PEP can now be performed simply by writing the algorithms just as you would implement them in Matlab. The new PEPit (performance estimation in Python) is available from here (thanks to the fabulous work of Baptiste Goujaud and Céline Moucer). It is easy to experiment with it using this notebook (see colab).

preprints

  1. preprint
    PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python
    arXiv:2201.04040 2022
  2. preprint
    Optimal first-order methods for convex functions with a quadratic upper bound
    arXiv:2205.15033 2022
  3. preprint
    A systematic approach to Lyapunov analyses of continuous-time models in convex optimization
    arXiv:2205.12772 2022
  4. preprint
    Quadratic minimization: from conjugate gradient to an adaptive Heavy-ball method with Polyak step-sizes
    arXiv:2210.06367 2022
  5. preprint
    Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: the Case of Negative Comonotonicity
    arXiv:2210.13831 2022

books

  1. book
    Acceleration Methods
    Foundations and Trends in Optimization 2021

journals

  1. journal
    An optimal gradient method for smooth strongly convex minimization
    Mathematical Programming 2022
  2. journal
    On the oracle complexity of smooth strongly convex minimization
    Journal of Complexity 2022
  3. journal
    Principled Analyses and Design of First-Order Methods with Inexact Proximal Operators
    Mathematical Programming (to appear) 2022
  4. journal
    A note on approximate accelerated forward-backward methods with absolute and relative errors, and possibly strongly convex objectives
    Open Journal of Mathematical Optimization 2022
  5. journal
    Convergence of a Constrained Vector Extrapolation Scheme
    SIAM Journal on Mathematics of Data Science 2022
  6. journal
    Optimal complexity and certification of Bregman first-order methods
    Mathematical Programming 2022
  7. journal
    Efficient first-order methods for convex minimization: a constructive approach
    Mathematical Programming 2020
  8. journal
    Operator splitting performance estimation: Tight contraction factors and optimal parameter selection
    SIAM Journal on Optimization 2020
  9. journal
    Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation
    SIAM Journal on Optimization 2020
  10. journal
    Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
    Journal of Optimization Theory and Applications 2018
  11. journal
    Exact worst-case performance of first-order methods for composite convex optimization
    SIAM Journal on Optimization 2017
  12. journal
    On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions [Best paper award]
    Optimization Letters 2017
  13. journal
    Smooth strongly convex interpolation and exact worst-case performance of first-order methods
    Mathematical Programming 2017

conferences

  1. conference
    Last-Iterate Convergence of Optimistic Gradient Method for Monotone Variational Inequalities
    In Advances in Neural Information Processing Systems (NeurIPS) 2022
  2. conference
    Fast Stochastic Composite Minimization and an Accelerated Frank-Wolfe Algorithm under Parallelization
    In Advances in Neural Information Processing Systems (NeurIPS) 2022
  3. conference
    PROX-QP: Yet another Quadratic Programming Solver for Robotics and beyond
    In Robotics: Science and Systems (RSS) 2022
  4. conference
    Super-Acceleration with Cyclical Step-sizes
    In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS) 2022
  5. conference
    A Continuized View on Nesterov Acceleration for Stochastic Gradient Descent and Randomized Gossip [Outstanding paper award]
    In Advances in Neural Information Processing Systems (NeurIPS) 2021
  6. conference
    Complexity guarantees for Polyak steps with momentum
    In Proceedings of the 33rd Conference on Learning Theory (COLT) 2020
  7. conference
    Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions
    In Proceedings of the 32nd Conference on Learning Theory (COLT) 2019
  8. conference
    Lyapunov functions for first-order methods: Tight automated convergence guarantees
    In Proceedings of the 35th International Conference on Machine Learning (ICML) 2018
  9. conference
    Performance estimation toolbox (PESTO): automated worst-case analysis of first-order optimization methods
    In Proceedings of the 56th Conference on Decision and Control (CDC) 2017

PhD theses

  1. PhD thesis
    Convex Interpolation and Performance Estimation of First-order Methods for Convex Optimization [ICTEAM thesis award; IBM-FNRS innovation award; AW Tucker prize finalist]
    Taylor, Adrien B.
    Université catholique de Louvain 2017