In machine learning and other large-scale applications, deterministic and stochastic variants of the steepest descent method are nowadays widely used to minimize objectives that are only piecewise smooth. As an alternative, in this talk we present a deterministic descent method that generalizes the rescaled conjugate gradient method proposed by Philip Wolfe in 1975 for convex objectives. Without the convexity assumption, the new method exploits semismoothness to obtain conjugate pairs of generalized gradients, ensuring that it can only converge to Clarke stationary points. In addition to the theoretical analysis, we present preliminary numerical results.
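For readers unfamiliar with the terminology, a minimal statement of the optimality notion named in the abstract, in standard notation that is our addition rather than the speaker's: a locally Lipschitz objective $f$ is Clarke stationary at $x^*$ if
$$0 \in \partial f(x^*), \qquad \partial f(x^*) = \operatorname{conv}\Bigl\{\lim_{k \to \infty} \nabla f(x_k) : x_k \to x^*,\ f \text{ differentiable at } x_k\Bigr\},$$
i.e., the zero vector lies in the Clarke subdifferential, the convex hull of all limits of gradients at nearby points of differentiability. In the smooth case this reduces to the usual first-order condition $\nabla f(x^*) = 0$.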
How to join online
The talk is held in hybrid form; to join online via Zoom, use the following link:
Speaker: Prof. Andrea Walther, Department of Mathematics, Humboldt University Berlin
Time: 12:00 (noon)
Location: Hybrid (Room 32-349 and via Zoom)