Stochastic Neighbor Compression
Matt J. Kusner, Stephen Tyree, Kilian Q. Weinberger, Kunal Agrawal

Slides: mkusner.github.io/presentations/Stochastic_Neighbor_Compression.pdf

Page 1: Stochastic Neighbor Compression - GitHub Pagesmkusner.github.io/presentations/Stochastic_Neighbor_Compression.pdf · 1-nearest neighbor rule 1. reduce n Training Consistent Sampling

Stochastic Neighbor Compression

Matt J. Kusner, Stephen Tyree,
Kilian Q. Weinberger, Kunal Agrawal

Page 2

NN Classifier

[Figure: training inputs from class 1, class 2, and class 3]

[Cover & Hart, 1967]

Page 3

NN Classifier

[Figure: the three classes, plus a test input]

[Cover & Hart, 1967]

Page 4

NN Classifier

1-nearest neighbor rule [Cover & Hart, 1967]

[Figure: the test input takes the label of its nearest training input]

Page 5

NN Classifier

1-nearest neighbor rule [Cover & Hart, 1967]

during test-time...

complexity: O(dn), where n = number of training inputs and d = number of features

Page 6

NN Classifier

1-nearest neighbor rule [Cover & Hart, 1967]

during test-time...

complexity: O(dn) for each test input!

e.g. n = 6 × 10^4, d = 784, so dn ≈ 47 million operations per test input
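For reference, the brute-force lookup behind this O(dn) cost can be sketched in a few lines of numpy (the data below is random, standing in for the slide's MNIST-sized example; a smaller n is used so the snippet runs instantly):

```python
import numpy as np

def nn_predict(X_train, y_train, x_test):
    """Brute-force 1-NN: one distance to each of the n training points,
    each costing O(d), so O(dn) work per test input."""
    dists = np.sum((X_train - x_test) ** 2, axis=1)  # n squared distances
    return y_train[np.argmin(dists)]

# toy stand-in with the slide's MNIST dimensionality (d = 784); the full
# set would have n = 60,000, giving dn ≈ 47 million operations per query
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 784))
y = rng.integers(0, 10, size=600)
label = nn_predict(X, y, X[0])  # X[0]'s nearest neighbor is itself
```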

Page 7

NN Classifier

1-nearest neighbor rule, complexity O(dn) (n training inputs, d features)

How can we reduce this complexity?

Page 8

How can we reduce this complexity?

1. reduce n

Page 9

How can we reduce this complexity?

1. reduce n
   Training Consistent Sampling [Hart, 1968] [Angiulli, 2005]
   Prototype Generation [Mollineda et al., 2002] [Bandyopadhyay & Maulik, 2002]
   Prototype Positioning [Bermejo & Cabestany, 1999] [Toussaint, 2002]

Page 10

How can we reduce this complexity?

1. reduce n
2. reduce d

Page 11

How can we reduce this complexity?

2. reduce d
   Dimensionality Reduction [Hinton & Roweis, 2002] [Tenenbaum et al., 2000] [Weinberger et al., 2004] [van der Maaten & Hinton, 2008] [Weinberger & Saul, 2009]

Page 12

How can we reduce this complexity?

1. reduce n
2. reduce d
3. use data structures

Page 13

How can we reduce this complexity?

3. use data structures
   Tree Structures [Omohundro, 1989] [Beygelzimer et al., 2006]
   Hashing [Andoni & Indyk, 2006] [Gionis et al., 1999]


Page 15

How can we reduce this complexity?

1. reduce n
2. reduce d
3. use data structures

Dataset Compression

Page 16

Main Idea: learn new synthetic inputs!

training data → ‘compressed’ data

Page 17

Main Idea: learn new synthetic inputs!

training data:   [ x_1  x_2  · · ·  x_i  · · ·  x_{n-1}  x_n ]  with labels  y_1, y_2, . . . , y_{n-1}, y_n
‘compressed’ data:   [ z_1  · · ·  z_m ]  with labels  y_1, . . . , y_m

Page 18

Main Idea: learn new synthetic inputs!

training data:   [ x_1  x_2  · · ·  x_i  · · ·  x_{n-1}  x_n ]  with labels  y_1, . . . , y_n
‘compressed’ data:   [ z_1  · · ·  z_m ]  with labels  y_1, . . . , y_m, where m ≪ n

Page 19

Main Idea: learn new synthetic inputs!

use the ‘compressed’ set  [ z_1 · · · z_m ]  (with labels y_1, . . . , y_m)  to make future predictions

Page 20

Approach

steps to a new training set:

[Figure: training inputs from classes 1, 2, and 3]

Page 21

Approach

steps to a new training set:
1. subsample

Page 22

Approach

steps to a new training set:
1. subsample
2. learn prototypes

Page 23

Approach

steps to a new training set:
1. subsample
2. learn prototypes: move inputs to minimize 1-nn training error
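Step 1 can be sketched in a couple of lines (a minimal illustration: the function name and plain uniform subsampling are my own stand-ins, not necessarily the authors' exact procedure):

```python
import numpy as np

def init_prototypes(X, y, m, rng):
    """Step 1: subsample m training inputs (with their labels) to
    initialize the prototypes z_1, ..., z_m."""
    idx = rng.choice(len(X), size=m, replace=False)
    return X[idx].copy(), y[idx].copy()

# Step 2 (learning) then moves the prototypes to reduce the 1-nn training
# error while their labels stay fixed, using the objective on later slides.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 3, size=200)
Z, y_z = init_prototypes(X, y, m=20, rng=rng)
```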

Page 24

Learning Prototypes

move inputs to minimize 1-nn training error

Page 25

Learning Prototypes

move inputs to minimize 1-nn training error:

\min_{z_1,\dots,z_m} \sum_{i=1}^{n} \left[ \mathrm{nn}_{\{z_1,\dots,z_m\}}(x_i) \neq y_i \right]

where \mathrm{nn}_{\{z_1,\dots,z_m\}}(x_i) is the label of the nearest neighbor to x_i in the set \{z_1, \dots, z_m\}, and y_i is the true label.

Page 26

Learning Prototypes

\min_{z_1,\dots,z_m} \sum_{i=1}^{n} \left[ \mathrm{nn}_{\{z_1,\dots,z_m\}}(x_i) \neq y_i \right]

[Figure: a training input x_i surrounded by prototypes z_1, z_2, z_3, z_4]

Page 27

Learning Prototypes

\min_{z_1,\dots,z_m} \sum_{i=1}^{n} \left[ \mathrm{nn}_{\{z_1,\dots,z_m\}}(x_i) \neq y_i \right]

Problem: this objective is not continuous and not differentiable.

Page 28

Learning Prototypes

Idea: smooth the objective with a Stochastic Neighborhood [Hinton & Roweis, 2002].

Page 29

Learning Prototypes

Stochastic Neighborhood [Hinton & Roweis, 2002]

[Figure: a training input x_i and the nearby prototypes]

Page 30

Learning Prototypes

Stochastic Neighborhood [Hinton & Roweis, 2002]: define p_i, the probability that x_i is predicted correctly.

Page 31

Learning Prototypes

p_i, the probability that x_i is predicted correctly:

p_i = \frac{1}{Z} \sum_{j : y_j = y_i} \exp\left( -\gamma^2 \, \lVert x_i - z_j \rVert_2^2 \right)

where the sum runs over the prototypes z_j whose label matches y_i, and Z normalizes over all m prototypes.

Page 32

Learning Prototypes

minimize the negative log likelihood:

\min_{z_1,\dots,z_m} \sum_{i=1}^{n} -\log(p_i)

Page 33

Learning Prototypes

\min_{z_1,\dots,z_m} \sum_{i=1}^{n} -\log(p_i)

This objective is smooth: use conjugate gradient descent!
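Putting the pieces together, here is a compact sketch of the smoothed objective with a conjugate-gradient fit via scipy (the function and variable names are my own; scipy's CG minimizer with finite-difference gradients stands in for the authors' implementation, and the clustered toy data is made up):

```python
import numpy as np
from scipy.optimize import minimize

def snc_loss(Z_flat, X, y, yz, m, d, gamma):
    """Sum_i -log(p_i), where p_i is the probability that x_i is
    predicted correctly under the stochastic neighborhood."""
    Z = Z_flat.reshape(m, d)
    D = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # ||x_i - z_j||^2, shape (n, m)
    W = np.exp(-gamma**2 * D)                            # soft neighbor affinities
    # p_i: affinity mass on same-label prototypes, normalized over all prototypes
    p = (W * (y[:, None] == yz[None, :])).sum(1) / (W.sum(1) + 1e-12)
    return -np.log(p + 1e-12).sum()

rng = np.random.default_rng(0)
n, d, m, gamma = 120, 2, 6, 1.0
# three well-separated 2-D clusters, one per class
X = np.vstack([rng.normal(c, 0.3, size=(n // 3, d)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], n // 3)
idx = rng.choice(n, size=m, replace=False)               # step 1: subsample
Z0, yz = X[idx].copy(), y[idx].copy()
res = minimize(snc_loss, Z0.ravel(),                     # step 2: move prototypes
               args=(X, y, yz, m, d, gamma), method="CG")
Z = res.x.reshape(m, d)                                  # learned prototypes
```

With only m·d = 12 free parameters this runs in a fraction of a second; the prototype labels yz stay fixed throughout, exactly as on the previous slides.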

Page 34

Results [a motivating visualization]

Page 35

Results: Yale-Faces
- 38 people
- ~64 images/person
- lighting changes


Page 39

Results: subsampled real faces


Page 42

Results: learned faces

Page 43

Results [datasets]


Page 45

Results: training inputs

Page 46

Results: number of classes

Page 47

Results: original & reduced features

Page 48

Results: original & reduced features [Weinberger & Saul, 2009]

Page 49

Results [error]


Page 51

Results: ratio of training inputs in compressed set

Page 52

Results: test error


Page 54

Results: on full training set

Page 55

Results: initialization

Page 56

Results: training set consistent sampling

Page 57

Results: our method


Page 59

Results [test-time speed-up]

Page 60

Results: complementary approaches [test-time speed-up]
- ball-trees
- locality-sensitive hashing (LSH)


Page 62

Results: complementary approaches [test-time speed-up]

[Plot: speed-up vs. compression rate, comparing 1-NN on the full dataset against 1-NN after SNC]

Page 63

Results: complementary approaches [test-time speed-up]

[Plot: speed-up vs. compression rate, comparing ball-trees on the full dataset against ball-trees after SNC]
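These speed-ups compose because SNC only shrinks the set being indexed. A minimal illustration (a scipy KD-tree stands in for a ball-tree here, and the data and sizes are made-up stand-ins, not the paper's benchmarks):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X_full = rng.normal(size=(10000, 8))                     # stand-in full training set
Z = X_full[rng.choice(10000, size=500, replace=False)]   # stand-in SNC-compressed set

# index only the m = 500 prototypes instead of all n = 10,000 inputs:
# the tree is smaller to build and faster to search
tree = cKDTree(Z)
dist, idx = tree.query(rng.normal(size=(1, 8)), k=1)     # 1-NN against the compressed set
```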

Page 64

Results: complementary approaches [test-time speed-up]

[Plot: speed-up vs. compression rate, comparing LSH on the full dataset against LSH after SNC]



Page 70

Summary

Stochastic Neighbor Compression
▫ learns a compressed training set for the NN classifier
▫ uses a Stochastic Neighborhood [Hinton & Roweis, 2002]
▫ compression by 96% without error increase in 5/7 cases
▫ test-time speed-ups on top of NN data structures

[Figure: original data, subsample, after optimization]

Page 71

Thank you! Questions?

Page 72

Results [robustness to label noise]


Page 74

Results [robustness to label noise]: noise added to training labels

Page 75

Results [robustness to label noise]: using larger k

Page 76

Results [robustness to label noise]: training set consistent sampling

Page 77

Results [robustness to label noise]: our method