
Page 1: Dual Augmented Lagrangian, Proximal Minimization, and MKL

Ryota Tomioka¹, Taiji Suzuki¹, and Masashi Sugiyama²
¹University of Tokyo, ²Tokyo Institute of Technology

2009-09-15 @ TU Berlin

Page 2: Outline

1. Introduction: Lasso, group lasso, and MKL; Objective
2. Method: Proximal minimization algorithm; Multiple Kernel Learning
3. Experiments
4. Summary

Page 3: Outline — 1. Introduction: Lasso, group lasso, and MKL


Page 5: Minimum norm reconstruction

w ∈ ℝ^n: unknown. y₁, y₂, …, y_m: observations, where y_i = ⟨x_i, w⟩ + ε_i, ε_i ∼ N(0, 1). Recover w from y.

Underdetermined (when n > m):

    minimize_w ‖w‖₀   s.t.  L(w) ≤ C,

where L(w) = ½‖Xw − y‖² and ‖w‖₀ is the number of non-zero elements in w.

Moreover, it is often good to know which features are useful. However, this is NP-hard!


Page 9: Convex relaxation

p-norm-like functions ‖w‖_p^p = Σ_{j=1}^n |w_j|^p: convex if p ≥ 1, non-convex if p < 1.

[Figure: |x|^0.01, |x|^0.5, |x|, and x² plotted over x ∈ [−3, 3].]

‖·‖₁-regularization is the closest to ‖·‖₀ within convex norm-like functions.


Page 11: Lasso regression

Problem 1:  minimize ‖w‖₁   s.t.  L(w) ≤ C.

Problem 2:  minimize L(w)   s.t.  ‖w‖₁ ≤ C′.

Problem 3:  minimize L(w) + λ‖w‖₁.   [From Efron et al. (2003)]

Note: the three problems above are equivalent to each other; a monotone reparameterization (relating C, C′, and λ) preserves the equivalence. We focus on the third problem.

Page 12: Why ℓ1-regularization?

The closest to ‖·‖₀ within convex norm-like functions.
Non-differentiable at the origin (truncation with finite λ).
Non-convex regularizers (p < 1) → iteratively solve (weighted) ℓ1-regularization.
Bayesian sparse models (type-II ML) → iteratively solve (weighted) ℓ1-regularization (in special cases) (Wipf & Nagarajan, 08).

[Figure: plot over x ∈ [−2, 2].]

Page 13: Generalizations

Generalize the loss term, e.g., ℓ1-logistic regression:

    minimize_{w ∈ ℝ^n}  Σ_{i=1}^m −log P(y_i | x_i; w) + λ‖w‖₁,

where P(y | x; w) = σ(y⟨w, x⟩), y ∈ {−1, +1}, and σ(u) = 1/(1 + exp(−u)).

Generalize the regularization term, e.g., group lasso (Yuan & Lin, 06):

    minimize_{w ∈ ℝ^n}  L(w) + λ Σ_{g ∈ G} ‖w_g‖₂,

where G is a partition of {1, …, n}, w = (w_{g1}; w_{g2}; …; w_{gq}), and q = |G|.
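To make the two generalized objectives concrete, here is a minimal NumPy sketch (not from the slides; the function names, the toy data, and the group partition `groups` are illustrative assumptions) that evaluates the ℓ1-logistic objective and the group-lasso penalty:

    import numpy as np

    def l1_logistic_objective(w, X, y, lam):
        # sum_i -log sigma(y_i <w, x_i>) + lam * ||w||_1, with y_i in {-1, +1}
        margins = y * (X @ w)
        loss = np.sum(np.log1p(np.exp(-margins)))   # -log sigma(m) = log(1 + exp(-m))
        return loss + lam * np.sum(np.abs(w))

    def group_lasso_penalty(w, groups, lam):
        # lam * sum_g ||w_g||_2 for a partition `groups` of {0, ..., n-1}
        return lam * sum(np.linalg.norm(w[g]) for g in groups)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 6))
    y = np.sign(rng.standard_normal(20))
    w = rng.standard_normal(6)
    groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
    print(l1_logistic_objective(w, X, y, lam=0.1))
    print(group_lasso_penalty(w, groups, lam=0.1))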

Page 14: Introducing Kernels

Multiple Kernel Learning (MKL) (Lanckriet, Bach, et al., 04). Let H₁, H₂, …, H_n be RKHSs and K₁, K₂, …, K_n be the kernel functions. Use functions f = f₁ + f₂ + ⋯ + f_n with f_j ∈ H_j:

    minimize_{f_j ∈ H_j, b ∈ ℝ}  L(f₁ + f₂ + ⋯ + f_n + b) + λ Σ_{j=1}^n ‖f_j‖_{H_j}

        ⇓ representer theorem

    minimize_{α_j ∈ ℝ^m, b ∈ ℝ}  f_ℓ( Σ_{j=1}^n K_j α_j + b·1 ) + λ Σ_{j=1}^n ‖α_j‖_{K_j},

where ‖α_j‖_{K_j} = √(α_j⊤ K_j α_j).

… nothing but a kernel-weighted group lasso.


Page 16: Modeling assumptions

In many cases the loss term L(·) can be decomposed into a loss function f_ℓ and a design matrix A whose rows are x₁⊤, …, x_m⊤.

Squared loss:

    f_ℓ^Q(z) = ½‖y − z‖²   ⇒   f_ℓ^Q(Aw) = ½ Σ_{i=1}^m (y_i − ⟨x_i, w⟩)².

Logistic loss:

    f_ℓ^L(z) = Σ_{i=1}^m log(1 + exp(−y_i z_i))   ⇒   f_ℓ^L(Aw) = Σ_{i=1}^m −log σ(y_i⟨w, x_i⟩).
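A small NumPy sketch of this decomposition (only the formulas come from the slide; the names f_quad, f_logistic and the toy data are illustrative):

    import numpy as np

    def f_quad(z, y):
        # squared loss f_Q(z) = 0.5 * ||y - z||^2
        return 0.5 * np.sum((y - z) ** 2)

    def f_logistic(z, y):
        # logistic loss f_L(z) = sum_i log(1 + exp(-y_i z_i)), y_i in {-1, +1}
        return np.sum(np.log1p(np.exp(-y * z)))

    # the loss term is the composition L(w) = f_ell(A w)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))            # rows of A are the inputs x_i
    w = rng.standard_normal(3)
    y_reg = rng.standard_normal(5)             # real-valued targets (squared loss)
    y_cls = np.sign(rng.standard_normal(5))    # +/-1 labels (logistic loss)
    print(f_quad(A @ w, y_reg), f_logistic(A @ w, y_cls))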

Page 17: Outline — 1. Introduction: Objective

Page 18: Objective

Develop an optimization algorithm for the problem:

    minimize_{w ∈ ℝ^n}  f_ℓ(Aw) + φ_λ(w).

A ∈ ℝ^{m×n} (m: #observations, n: #unknowns).
f_ℓ is convex and twice differentiable.
φ_λ(w) is convex but possibly non-differentiable, e.g., φ_λ(w) = λ‖w‖₁; also ηφ_λ = φ_{ηλ}.
We are interested in algorithms for general f_ℓ (↔ LARS).

Page 19: Where does the difficulty come from?

Conventional view: the non-differentiability of φ_λ(w).

Bound the regularizer from above with a differentiable function: FOCUSS (Rao & Kreutz-Delgado, 99); Majorization-Minimization (Figueiredo et al., 07).
Explicitly handle the non-differentiability: sub-gradient L-BFGS (Andrew & Gao, 07; Yu et al., 08).

Our view: the coupling between variables introduced by A.

Page 20: Where does the difficulty come from?

Our view: the coupling between variables introduced by A. In fact, when A = I_n,

    min_{w ∈ ℝ^n} ( ½‖y − w‖₂² + λ‖w‖₁ ) = Σ_{j=1}^n min_{w_j ∈ ℝ} ( ½(y_j − w_j)² + λ|w_j| )

    ⇒  w_j* = ST_λ(y_j) = { y_j − λ  (λ ≤ y_j),   0  (−λ ≤ y_j ≤ λ),   y_j + λ  (y_j ≤ −λ) }.

The min is obtained analytically!  [Figure: the soft-threshold function, flat on [−λ, λ].]

We focus on φ_λ for which the above min can be obtained analytically.
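A one-line NumPy version of the soft-thresholding operator ST_λ above (a sketch; the vectorized expression reproduces the three-case definition):

    import numpy as np

    def soft_threshold(y, lam):
        # ST_lam(y): shrink each coordinate toward zero by lam, clipping at zero
        return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

    # when A = I_n the lasso solution is obtained coordinate-wise:
    print(soft_threshold(np.array([2.0, 0.3, -1.5]), lam=0.5))   # [ 1.5  0.  -1. ]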

Page 21: Earlier study

Iterative Shrinkage/Thresholding (IST) (Figueiredo & Nowak, 03; Daubechies et al., 04; …):

Algorithm
1. Choose an initial solution w⁰.
2. Repeat until some stopping criterion is satisfied:

    w^{t+1} ← argmin_{w ∈ ℝ^n} ( Q_{η_t}(w; w^t) + φ_λ(w) ),

where

    Q_η(w; w^t) = L(w^t) + ∇L(w^t)⊤(w − w^t)   [(1) linearly approximate the loss term]
                  + (1/(2η)) ‖w − w^t‖₂²        [(2) penalize the squared distance from the last iterate].

Note: minimizing Q_η(w; w^t) alone gives the ordinary gradient step.

Page 22: Earlier study: IST

    argmin_{w ∈ ℝ^n} ( Q_{η_t}(w; w^t) + φ_λ(w) )
    = argmin_{w ∈ ℝ^n} ( const. + ∇L(w^t)⊤(w − w^t) + (1/(2η_t))‖w − w^t‖₂² + φ_λ(w) )
    = argmin_{w ∈ ℝ^n} ( (1/(2η_t))‖w − w̃^t‖₂² + φ_λ(w) )   [assume this min can be obtained analytically]
    =: ST_{η_t λ}(w̃^t),

where w̃^t = w^t − η_t ∇L(w^t) (gradient step). Finally,

    w^{t+1} ← ST_{η_t λ}( w^t − η_t ∇L(w^t) )   [shrink after a gradient step].

Pro: easy to implement. Con: bad for poorly conditioned A.
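Putting the shrink-after-gradient-step update into code gives the following IST sketch for the lasso (squared loss, constant step size; a minimal illustration under these assumptions, not the tuned variants such as SpaRSA):

    import numpy as np

    def soft_threshold(v, thr):
        return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

    def ist_lasso(A, y, lam, n_iter=500):
        # IST for  min_w 0.5*||y - A w||^2 + lam*||w||_1
        eta = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
        w = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ w - y)               # gradient of the loss term
            w = soft_threshold(w - eta * grad, eta * lam)   # gradient step, then shrink
        return w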


Page 26: Summary so far

We want to solve:

    minimize_{w ∈ ℝ^n}  f_ℓ(Aw) + φ_λ(w).

f_ℓ is convex and twice differentiable.
φ_λ(w) is a convex function for which the minimization

    ST_λ(z) = argmin_{w ∈ ℝ^n} ( ½‖w − z‖₂² + φ_λ(w) )

can be carried out analytically, e.g., φ_λ(w) = λ‖w‖₁.
Exploit the non-differentiability of φ_λ: more sparsity → more efficiency.
Robustify against poor conditioning of A.

[Figure: the soft-threshold function ST(z), flat on [−λ, λ].]

Page 27: Outline — 2. Method: Proximal minimization algorithm


Page 29: Proximal Minimization (Rockafellar, 1976)

1. Choose an initial solution w⁰.
2. Repeat until some stopping criterion is satisfied:

    w^{t+1} ← argmin_{w ∈ ℝ^n} ( f_ℓ(Aw)  [no approximation]  + φ_λ(w) + (1/(2η_t))‖w − w^t‖₂²  [penalize the squared distance from the last iterate] ).

Let f_η(w) = min_{w̃ ∈ ℝ^n} ( f_ℓ(Aw̃) + φ_λ(w̃) + (1/(2η))‖w̃ − w‖₂² ).

Fact 1: f_η(w) ≤ f(w) = f_ℓ(Aw) + φ_λ(w).
Fact 2: f_η(w*) = f(w*).

Linearly approximating the loss term → IST.

[Figure: f_η(w) and f(w) near the origin, with thresholds at ±λ.]

Page 30: The difference

IST: linearly approximates the loss term:

    f_ℓ(Aw) ≅ f_ℓ(Aw^t) + (w − w^t)⊤A⊤∇f_ℓ(Aw^t)

→ tightest at the current iterate w^t.

DAL (proposed): linearly lower-bounds the loss term:

    f_ℓ(Aw) = max_{α ∈ ℝ^m} ( −f_ℓ*(−α) − w⊤A⊤α )

→ tightest at the next iterate w^{t+1}.
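As a concrete instance of the lower bound (a standard computation, not spelled out on the slide), the conjugate of the squared loss f_ℓ(z) = ½‖y − z‖² can be written in closed form:

    \begin{align*}
    f_\ell^*(\alpha)
      &= \sup_{z \in \mathbb{R}^m}\bigl(\alpha^\top z - \tfrac{1}{2}\|y - z\|^2\bigr)
       = \alpha^\top y + \tfrac{1}{2}\|\alpha\|^2
      \qquad (\text{maximizer } z = y + \alpha), \\
    f_\ell(Aw)
      &= \max_{\alpha \in \mathbb{R}^m}\bigl(-f_\ell^*(-\alpha) - w^\top A^\top\alpha\bigr)
       = \max_{\alpha \in \mathbb{R}^m}\bigl(\alpha^\top y - \tfrac{1}{2}\|\alpha\|^2 - w^\top A^\top\alpha\bigr),
    \end{align*}

whose inner maximizer is α = y − Aw, recovering ½‖y − Aw‖², so the linear lower bound touches f_ℓ(Aw) exactly at the w for which α was optimized.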

Page 31: The algorithm

IST (earlier study)
1. Choose an initial solution w⁰.
2. Repeat until some stopping criterion is satisfied:

    w^{t+1} ← ST_{η_t λ}( w^t + η_t A⊤(−∇f_ℓ(Aw^t)) ).

Dual Augmented Lagrangian (proposed)
1. Choose an initial solution w⁰ and a sequence η₀ ≤ η₁ ≤ ⋯.
2. Repeat until some stopping criterion is satisfied:

    w^{t+1} ← ST_{η_t λ}( w^t + η_t A⊤α^t ),   where
    α^t = argmin_{α ∈ ℝ^m} ( f_ℓ*(−α) + (1/(2η_t)) ‖ST_{η_t λ}(w^t + η_t A⊤α)‖₂² ).
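The DAL update above can be sketched for the lasso (squared loss), where f_ℓ*(−α) = −α⊤y + ½‖α‖² (see the conjugate worked out two slides back). The inner minimization is handed to L-BFGS here purely for brevity; the inner solver and the schedule for η are illustrative assumptions, not taken from the slides:

    import numpy as np
    from scipy.optimize import minimize

    def soft_threshold(v, thr):
        return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

    def dal_lasso(A, y, lam, eta=1.0, n_outer=30):
        # DAL sketch for  min_w 0.5*||y - A w||^2 + lam*||w||_1
        m, n = A.shape
        w = np.zeros(n)
        for _ in range(n_outer):
            def inner(alpha):   # f*(-alpha) + (1/(2*eta)) * ||ST_{eta*lam}(w + eta*A^T alpha)||^2
                v = soft_threshold(w + eta * (A.T @ alpha), eta * lam)
                return -alpha @ y + 0.5 * alpha @ alpha + 0.5 / eta * (v @ v)
            def inner_grad(alpha):
                v = soft_threshold(w + eta * (A.T @ alpha), eta * lam)
                return alpha - y + A @ v
            alpha = minimize(inner, np.zeros(m), jac=inner_grad, method="L-BFGS-B").x
            w = soft_threshold(w + eta * (A.T @ alpha), eta * lam)
            eta *= 2.0           # non-decreasing sequence eta_0 <= eta_1 <= ...
        return w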

Page 32: Numerical examples

[Figures labeled IST and DAL; axes span [−2, 2] × [−2, 2].]

Page 33: Derivation

    w^{t+1} ← argmin_{w ∈ ℝ^n} ( f_ℓ(Aw) + φ_λ(w) + (1/(2η_t))‖w − w^t‖₂² )
            = argmin_{w ∈ ℝ^n} { max_{α ∈ ℝ^m} ( −f_ℓ*(−α) − w⊤A⊤α ) + φ_λ(w) + (1/(2η_t))‖w − w^t‖₂² }.

Exchange the order of min and max, then carry out the remaining calculations…


Page 35: Augmented Lagrangian

Equality constrained problem:

    minimize_x f(x)  s.t.  c(x) = 0    ⇔    minimize_x L_hard(x) = { f(x)  (if c(x) = 0),  +∞  (otherwise) }.

Ordinary Lagrangian:    L(x, y) = f(x) + y⊤c(x).
Augmented Lagrangian:   L_η(x, y) = f(x) + y⊤c(x) + (η/2)‖c(x)‖₂².

    L(x, y) ≤ L_η(x, y) ≤ L_hard(x)
       ↓ min_x
    d(y) ≤ d_η(y) ≤ f(x*)   (primal optimum).


Page 41: Augmented Lagrangian Algorithm (Hestenes, 69; Powell, 69)

1. Choose an initial multiplier y⁰ and a sequence η₀ ≤ η₁ ≤ ⋯.
2. Update the multiplier:

    y^{t+1} ← y^t + η_t c(x^t),   where  x^t = argmin_x L_{η_t}(x, y^t).

Note: the multiplier y^t is updated as long as the constraint is violated. The AL method ⇔ proximal minimization in the dual (Rockafellar, 76):

    y^{t+1} ← argmax_y ( d(y) − (1/(2η_t))‖y − y^t‖₂² ),   where  d(y) = min_x ( f(x) + y⊤c(x) ).
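A minimal numerical sketch of the multiplier update, on a toy equality-constrained problem (minimize ½‖x‖² subject to Bx = b; the problem, the matrix B, and the η schedule are illustrative assumptions, not from the slides):

    import numpy as np

    def augmented_lagrangian(B, b, eta=1.0, n_iter=50):
        # Hestenes-Powell method for  min_x 0.5*||x||^2  s.t.  B x - b = 0
        m, n = B.shape
        y = np.zeros(m)                            # Lagrange multiplier
        for _ in range(n_iter):
            # x^t = argmin_x L_eta(x, y): quadratic, so solve its stationarity condition
            # (I + eta * B^T B) x = -B^T (y - eta * b)
            x = np.linalg.solve(np.eye(n) + eta * B.T @ B, -B.T @ (y - eta * b))
            y = y + eta * (B @ x - b)              # multiplier update y <- y + eta * c(x)
            eta *= 1.5                             # non-decreasing eta_0 <= eta_1 <= ...
        return x, y

    B, b = np.array([[1.0, 1.0]]), np.array([1.0])
    print(augmented_lagrangian(B, b)[0])           # -> approx. [0.5, 0.5]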

Page 42: Outline — 2. Method: Multiple Kernel Learning

Page 43: Objective (MKL)

Learn a linear combination of kernels from data:

    minimize_{f ∈ H, b ∈ ℝ, d ∈ ℝ^n}  Σ_{i=1}^m ℓ_i(f(x_i) + b) + (λ/2)‖f‖²_{H(d)}
    s.t.  K(d) = Σ_{j=1}^n d_j K_j,  d_j ≥ 0,  Σ_j d_j ≤ 1.

[Figure: the hinge loss max(0, 1 − yf).]

Page 44: Representer theorem

    minimize_{α ∈ ℝ^m, b ∈ ℝ, d ∈ ℝ^n}  L(K(d)α + b·1) + (λ/2) α⊤K(d)α
    s.t.  K(d) = Σ_{j=1}^n d_j K_j,  d_j ≥ 0,  Σ_j d_j ≤ 1.

Page 45: Relaxing the regularization term

    minimize_{α ∈ ℝ^m, b ∈ ℝ, d ∈ ℝ^n}  L(K(d)α + b·1) + (λ/2) α⊤K(d)α
    s.t.  K(d) = Σ_{j=1}^n d_j K_j,  d_j ≥ 0,  Σ_j d_j ≤ 1.

Introduce auxiliary variables α_j (j = 1, …, n):

    α⊤K(d)α = min_{α_j ∈ ℝ^m}  Σ_{j=1}^n (α_j⊤K_jα_j)/d_j   s.t.  Σ_{j=1}^n K_jα_j = K(d)α.

(Proof) Introduce a Lagrange multiplier β and minimize

    ½ Σ_{j=1}^n (α_j⊤K_jα_j)/d_j + β⊤( K(d)α − Σ_{j=1}^n K_jα_j ),

which is attained at α_j = d_jβ and β = α.


Page 48: Minimization of the upper-bound

    minimize_{α_j ∈ ℝ^m, b ∈ ℝ, d ∈ ℝ^n}  L( Σ_{j=1}^n K_jα_j + b·1 ) + (λ/2) Σ_{j=1}^n (α_j⊤K_jα_j)/d_j
    s.t.  d_j ≥ 0,  Σ_j d_j ≤ 1.

    Σ_{j=1}^n (α_j⊤K_jα_j)/d_j = Σ_{j=1}^n d_j (α_j⊤K_jα_j)/d_j² = Σ_{j=1}^n d_j ( ‖α_j‖_{K_j}/d_j )²
        ≥ ( Σ_{j=1}^n d_j ‖α_j‖_{K_j}/d_j )²    (Σ_j d_j = 1, Jensen's inequality)
        = ( Σ_{j=1}^n ‖α_j‖_{K_j} )²    ← squared linear sum of RKHS norms.
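For completeness (this step is implicit on the slide), the Jensen bound is attained, and hence the minimum over d is reached, at d_j = ‖α_j‖_{K_j} / Σ_k ‖α_k‖_{K_k}, since then

    \sum_{j=1}^n \frac{\|\alpha_j\|_{K_j}^2}{d_j}
      \;=\; \sum_{j=1}^n \|\alpha_j\|_{K_j} \sum_{k=1}^n \|\alpha_k\|_{K_k}
      \;=\; \Bigl(\sum_{j=1}^n \|\alpha_j\|_{K_j}\Bigr)^{2}.

Minimizing over d therefore eliminates d and leaves the linear sum of RKHS norms as the effective regularizer.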


Page 54: Equivalence of the two formulations

Penalizing the square of the linear sum of RKHS norms (Bach et al.):

    minimize_{α_j ∈ ℝ^m, b ∈ ℝ}  L( Σ_{j=1}^n K_jα_j + b·1 ) + (λ/2) ( Σ_{j=1}^n ‖α_j‖_{K_j} )²    (A)

⇔ Penalizing the linear sum of RKHS norms (proposed):

    minimize_{α_j ∈ ℝ^m, b ∈ ℝ}  L( Σ_{j=1}^n K_jα_j + b·1 ) + λ′ Σ_{j=1}^n ‖α_j‖_{K_j}    (B)

Optimality of (A):  ∇_{α_j} L + λ ( Σ_{j=1}^n ‖α_j‖_{K_j} ) ∂_{α_j}‖α_j‖_{K_j} ∋ 0.
Optimality of (B):  ∇_{α_j} L + λ′ ∂_{α_j}‖α_j‖_{K_j} ∋ 0.

Hence the two formulations coincide when λ′ = λ Σ_{j=1}^n ‖α_j‖_{K_j}, evaluated at the optimum.

Page 55: SpicyMKL

DAL + MKL = SpicyMKL (Sparse Iterative MKL).
The bias term b and the hinge loss need special care.
Soft-thresholding per kernel (↔ per variable):

    ST_λ(α_j) = 0                                       (‖α_j‖_{K_j} ≤ λ),
                ( ‖α_j‖_{K_j} − λ ) α_j / ‖α_j‖_{K_j}   (otherwise).
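A minimal sketch of the per-kernel (block) soft-thresholding operator above, following the slide's notation for α_j and K_j (the function name is mine):

    import numpy as np

    def kernel_soft_threshold(alpha_j, K_j, lam):
        # shrink the block alpha_j by lam in the RKHS norm ||alpha_j||_{K_j}
        norm = np.sqrt(alpha_j @ K_j @ alpha_j)
        if norm <= lam:
            return np.zeros_like(alpha_j)          # the whole kernel is switched off
        return (norm - lam) / norm * alpha_j       # shrink the norm, keep the direction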

Page 56: Outline — 3. Experiments

Page 57: Experimental setting

Problem: lasso (squared loss + L1 regularization).
Comparison with: l1_ls (interior-point method) and SpaRSA (step-size improved IST). (Problem-specific methods, e.g., LARS, are not considered.)
Random design matrix A ∈ ℝ^{m×n} (m: #observations, n: #unknowns) generated as:
    A=randn(m,n); (well conditioned)
    A=U*diag(1./(1:m))*V'; (poorly conditioned)
Two settings: Medium scale (n = 4m, n < 10000) and Large scale (m = 1024, n < 1e+6).
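A NumPy rendering of the two generators above (the MATLAB one-liners are from the slide; taking U and V as random orthonormal factors from a QR factorization is my assumption, since the slide does not say how they are produced):

    import numpy as np

    def design_matrices(m, n, seed=0):
        rng = np.random.default_rng(seed)
        A_well = rng.standard_normal((m, n))                 # A = randn(m, n)
        U, _ = np.linalg.qr(rng.standard_normal((m, m)))     # m x m orthonormal
        V, _ = np.linalg.qr(rng.standard_normal((n, m)))     # n x m, orthonormal columns
        A_poor = U @ np.diag(1.0 / np.arange(1, m + 1)) @ V.T   # A = U*diag(1./(1:m))*V'
        return A_well, A_poor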

Page 58: Results (medium scale)

[Figures: CPU time (s), number of iterations, and sparsity (%) versus the number of observations, for (a) normal conditioning and (b) poor conditioning. Methods: DALchol and DALcg (η₁ = 1000 in (a), η₁ = 100000 in (b)), SpaRSA, and l1_ls.]

Page 59: Results (large scale)

[Figures: CPU time (s), number of iterations, and sparsity (%) versus the number of unknown variables (10⁴–10⁶). Methods: DALchol (η₁ = 1000), DALcg (η₁ = 1000), SpaRSA, and l1_ls.]

Page 60: L1-logistic regression

m = 1,024; n = 4,096–32,768.

[Figures: CPU time (s) and sparsity (%) versus n (2¹²–2¹⁵), for λ = 0.1, 1, and 10. Methods: DALcg (fval), DALcg (gap), DALchol (gap), OWLQN, and l1_logreg.]

Page 61: Image classification

Picked five classes (anchor, ant, cannon, chair, cup) from the Caltech 101 dataset (Fei-Fei et al., 2004); ten 2-class classification problems.
Number of kernels: 1,760 = feature extraction (4) × spatial subdivision (22) × kernel functions (20).
Feature extraction: (a) hsvsift, (b) sift (scale auto), (c) sift (scale 4px fixed), (d) sift (scale 8px fixed) (used van de Sande's code).
Spatial subdivision and integration: (a) whole image, (b) 2x2 grid, and (c) 4x4 grid + spatial pyramid kernel (Lazebnik et al., 06).
Kernel functions: Gaussian RBF kernel and χ2 kernels using 10 different scale parameters each.
(cf. Gehler & Nowozin, 09)

Page 62: Outline — 4. Summary

Page 63: Summary

DAL (dual augmented Lagrangian) applies the augmented Lagrangian method in the dual (= proximal minimization in the primal); is efficient when m ≪ n; tolerates a poorly conditioned design matrix A better; and exploits sparsity in the solution (not in the design). Legendre transform: a linear lower bound instead of a linear approximation.

Tasks: theoretical analysis; a cool application.