SDCA-Powered Inexact Dual Augmented Lagrangian Method for Fast CRF Learning
Abstract
We propose an efficient dual augmented Lagrangian formulation for learning conditional random fields (CRFs). Our algorithm, which can be interpreted as an inexact gradient descent algorithm on the multiplier, does not require performing global inference at each iteration; instead, a fixed number of stochastic clique-wise updates per epoch suffices to obtain a sufficiently accurate estimate of the gradient with respect to the Lagrange multipliers. We prove that the proposed algorithm enjoys global linear convergence for both the primal and the dual objectives. Our experiments show that the proposed algorithm outperforms state-of-the-art baselines in terms of speed of convergence.
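The overall scheme described above — an augmented Lagrangian method whose inner problem is solved only inexactly, with a fixed number of stochastic coordinate-wise updates standing in for the paper's clique-wise SDCA steps — can be illustrated on a toy equality-constrained quadratic. All names and the specific problem below are illustrative assumptions, not the paper's actual CRF formulation:

```python
import numpy as np

# Toy problem (illustrative, not the paper's CRF objective):
# minimize (1/2)||x - c||^2  subject to  A x = b.
rng = np.random.default_rng(0)
n, m = 20, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

rho = 1.0                      # augmented Lagrangian penalty
x = np.zeros(n)                # primal variable
lam = np.zeros(m)              # Lagrange multiplier
col_norm2 = (A ** 2).sum(axis=0)

for epoch in range(200):
    # Inner loop: a FIXED number of stochastic coordinate updates,
    # i.e. the inner augmented-Lagrangian subproblem is solved only
    # inexactly (analogous to the clique-wise updates in the paper).
    for _ in range(n):
        i = rng.integers(n)
        r = A @ x - b
        # Partial derivative of the augmented Lagrangian w.r.t. x[i].
        g = (x[i] - c[i]) + A[:, i] @ lam + rho * (A[:, i] @ r)
        # Exact coordinate minimization (quadratic in x[i]).
        x[i] -= g / (1.0 + rho * col_norm2[i])
    # Inexact gradient ascent step on the multiplier.
    lam += rho * (A @ x - b)

residual = np.linalg.norm(A @ x - b)
print(residual)  # constraint violation shrinks as epochs proceed
```

Even though each inner subproblem is left unsolved after the fixed budget of stochastic updates, the multiplier steps still drive the constraint residual to zero, which is the behavior the paper's linear-convergence analysis formalizes for the CRF setting.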
Domains
Machine Learning [stat.ML]
Origin: Publisher files allowed on an open archive