Optimization considerations for regularizations of inverse and learning problems
Most inverse problems, and many learning tasks, are addressed through
variational formulations. In this framework, prior knowledge on the solutions
is conveniently enforced by specifically designed regularization terms. The
most efficient regularizers are nonsmooth, and _proximal splitting algorithms_
remain today the most appropriate minimization methods for tackling this kind
of problem.
I try to give a panorama of which class of methods should be used in practice,
depending on the structure and properties of the composite objective functional
at hand, advertising in the process the _generalized forward-backward_ splitting
I devised some years ago. Then, I briefly discuss some variants.
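As an illustration of the proximal splitting framework (the notation below is mine, not taken from the abstract), the basic forward-backward iteration for minimizing a composite functional F = f + g, with f smooth and g nonsmooth but proximable, reads:

```latex
% Forward-backward (proximal gradient) iteration:
% explicit gradient step on the smooth part f,
% implicit (proximal) step on the nonsmooth part g.
x^{(k+1)} = \operatorname{prox}_{\gamma g}\!\left( x^{(k)} - \gamma \nabla f\big(x^{(k)}\big) \right),
\quad \text{where} \quad
\operatorname{prox}_{\gamma g}(y) = \operatorname*{arg\,min}_{x} \; \tfrac{1}{2}\lVert x - y \rVert^2 + \gamma\, g(x),
```

with the step size γ set according to the Lipschitz constant of ∇f. The generalized version extends this scheme to objectives with a sum of several nonsmooth terms.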
In a second part, I will focus on the _graph total variation_ regularization. I
present an extension of the _cut-pursuit algorithm_ of Landrieu and Obozinski
(2017), which is a working-set approach combining continuous and combinatorial
optimization. Now able to tackle nondifferentiable terms besides the total
variation in the objective functional, the extended algorithm yields results so
promising that it might revitalize the use of the total variation in
large-scale settings.
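For reference (this is the standard definition, not spelled out in the abstract), the total variation of a signal x on a weighted graph G = (V, E, w) penalizes differences across edges:

```latex
% Graph total variation: weighted sum of absolute differences across edges,
% promoting solutions that are piecewise constant on the graph.
\mathrm{TV}(x) \;=\; \sum_{(i,j) \in E} w_{ij}\, \lvert x_i - x_j \rvert .
```

Its minimizers tend to be piecewise constant over the graph, which is precisely the structure that cut-pursuit exploits by iteratively refining a partition of the vertex set.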
Landrieu, L. & Obozinski, G. Cut Pursuit: Fast Algorithms to Learn Piecewise
Constant Functions on General Weighted Graphs. SIAM Journal on Imaging
Sciences, 2017, 10, 1724–1766.