
Page 1:

Information Selection for Microarray Data Analysis
(Selekcja informacji dla analizy danych z mikromacierzy)

Włodzisław Duch, Jacek Biesiada

Dept. of Informatics, Nicolaus Copernicus University
Google: Duch

BIT 2007

Page 2:

Microarrays

Probes detecting complementary DNA/RNA by hybridization.

Page 3:

Selection of information

• Attention: a basic cognitive skill, focus, constrained activations.
• Find relevant information:
  – discard attributes that do not contain information,
  – use weights to express their relative importance,
  – create new, more informative attributes,
  – reduce dimensionality by aggregating information.

• Ranking: treat each feature as independent.
• Selection: search for subsets, remove redundant features.

• Filters: universal, model-independent criteria.
• Wrappers: criteria specific to the data model are used.
• Frapper: filter + wrapper in the final stage.

Page 4:

Filters & Wrappers

Filter approach:

• define your problem, for example assignment of class labels;

• define an index of relevance for each feature R(Xi)

• rank features according to their relevance: R(Xi1) ≥ R(Xi2) ≥ … ≥ R(Xid);

• take all features with relevance above a threshold, R(Xi) ≥ tR.

Wrapper approach:

• select predictor P and performance measure P(Data)

• define search scheme: forward, backward or mixed selection

• evaluate a starting subset of features Xs, e.g. the single best or all features

• add/remove a feature Xi, accepting the new set Xs ← Xs ∪ {Xi} if

P(Data | Xs ∪ {Xi}) > P(Data | Xs)
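To make the two schemes concrete, here is a minimal sketch in Python, assuming scikit-learn is available; the synthetic data, the mutual-information relevance index and the naive Bayes predictor are illustrative stand-ins, not the models used in the talk.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

# Filter: rank features by a model-independent relevance index R(X_i),
# then keep those with relevance above a threshold t_R.
relevance = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(relevance)[::-1]
t_R = 0.05
selected_filter = [int(i) for i in ranking if relevance[i] > t_R]

# Wrapper: forward selection, accept feature X_i only if it improves P(Data | X_s + X_i).
predictor = GaussianNB()
subset, best = [], -np.inf
for _ in range(5):                                   # add at most 5 features
    scores = {i: cross_val_score(predictor, X[:, subset + [i]], y, cv=5).mean()
              for i in range(X.shape[1]) if i not in subset}
    i_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best:                               # no improvement -> stop
        break
    subset.append(i_best)
    best = s_best

print("filter keeps:", selected_filter, "wrapper keeps:", subset)
```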

Page 5:

Information theory – filters

X – vectors, Xj – attributes, Xj = f – attribute values,
Ci – classes, i = 1 .. K, with joint probability distribution p(C, Xj).

The amount of information contained in this joint distribution, summed over all classes, gives an estimation of feature importance:

$$I(C, X_j) = -\sum_{i=1}^{K} \int p(C_i, f)\,\log_2 p(C_i, f)\, df \;\approx\; -\sum_{k=1}^{M_j}\sum_{i=1}^{K} p\big(C_i, r_k(f)\big)\,\log_2 p\big(C_i, r_k(f)\big)$$

For continuous attribute values integrals are approximated by sums.

This implies discretization into rk(f) regions, an issue in itself.

Alternative: fitting p(Ci,f) density using Gaussian or other kernels.

Which method is more accurate and what are expected errors?
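As an illustration of the sum approximation, here is a small sketch (assuming NumPy; the synthetic feature and the choice of 8 equal-width bins are arbitrary) that estimates I(C, Xj) from binned counts:

```python
import numpy as np

def joint_information(f, c, n_bins=8):
    """-sum_{k,i} p(C_i, r_k(f)) * log2 p(C_i, r_k(f)) over equal-width bins r_k."""
    edges = np.histogram_bin_edges(f, bins=n_bins)
    k = np.clip(np.digitize(f, edges[1:-1]), 0, n_bins - 1)   # bin index of each sample
    classes = np.unique(c)
    p = np.zeros((n_bins, classes.size))
    for i, ci in enumerate(classes):
        p[:, i] = np.bincount(k[c == ci], minlength=n_bins)
    p /= p.sum()                                              # joint p(C_i, r_k(f))
    nz = p > 0
    return -(p[nz] * np.log2(p[nz])).sum()

rng = np.random.default_rng(0)
c = rng.integers(0, 2, 500)                                   # two classes
f = rng.normal(loc=c.astype(float), scale=1.0)                # feature correlated with class
print(joint_information(f, c))
```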

Page 6:

Information gain

Information gained by considering the joint probability distribution p(C, f) is a difference between the information in the marginal distributions and in the joint distribution:

$$IG(C, X_j) = I(C) + I(X_j) - I(C, X_j),$$

$$I(C) = -\sum_{i=1}^{K} p(C_i)\log_2 p(C_i), \qquad I(X_j) = -\sum_{k=1}^{M_j} p\big(r_k(f)\big)\log_2 p\big(r_k(f)\big)$$

• A feature is more important if its information gain is larger.
• Modifications of the information gain, frequently used as criteria in decision trees, include:

IGR(C,Xj) = IG(C,Xj)/I(Xj) the gain ratio

IGn(C,Xj) = IG(C,Xj)/I(C) an asymmetric dependency coefficient

DM(C,Xj) = IG(C,Xj)/I(C,Xj) normalized Mantaras distance
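A compact sketch of these indices computed from a class-by-bin contingency table (NumPy assumed; the counts are invented for illustration):

```python
import numpy as np

def info(p):
    """Shannon information (entropy) in bits of a probability vector/array."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def gain_indices(counts):
    p = counts / counts.sum()                      # joint p(C_i, r_k(f))
    I_C, I_X, I_CX = info(p.sum(1)), info(p.sum(0)), info(p.ravel())
    IG = I_C + I_X - I_CX                          # information gain
    return {"IG": IG,
            "IGR": IG / I_X,                       # gain ratio
            "IGn": IG / I_C,                       # asymmetric dependency coefficient
            "DM": IG / I_CX}                       # normalized (Mantaras) index as defined above

counts = np.array([[30.,  5.,  2.],                # rows: classes C_i
                   [ 4., 25., 10.]])               # columns: bins r_k(f)
print(gain_indices(counts))
```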

Page 7:

Information indices

Information gained by considering attribute Xj and classes C together is also known as 'mutual information', equal to the Kullback-Leibler divergence between the joint and product probability distributions:

$$IG(C, X_j) = \sum_{i=1}^{K}\sum_{k=1}^{M_j} p\big(C_i, r_k(f)\big)\,\log_2 \frac{p\big(C_i, r_k(f)\big)}{p(C_i)\,p\big(r_k(f)\big)} = D_{KL}\big(p(C, f)\,\|\,p(C)\,p(r(f))\big)$$

Entropy distance measure is a sum of conditional information:

$$D_I(C, X_j) = I(C|X_j) + I(X_j|C) = 2\,I(C, X_j) - I(C) - I(X_j)$$

Symmetrical uncertainty coefficient is obtained from entropy distance:

$$U(C, X_j) = 1 - \frac{D_I(C, X_j)}{I(C) + I(X_j)}$$
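The same kind of table gives the entropy distance and the symmetrical uncertainty; a self-contained sketch with illustrative probabilities:

```python
import numpy as np

H = lambda p: -(p[p > 0] * np.log2(p[p > 0])).sum()   # Shannon entropy in bits

p = np.array([[0.30, 0.05, 0.02],
              [0.04, 0.35, 0.24]])                    # p(C_i, r_k(f)), sums to 1
I_C, I_X, I_CX = H(p.sum(1)), H(p.sum(0)), H(p.ravel())

D_I = 2 * I_CX - I_C - I_X                            # I(C|X_j) + I(X_j|C)
U = 1 - D_I / (I_C + I_X)                             # symmetrical uncertainty
print(D_I, U)
```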

Page 8:

Purity indices

Many information-based quantities may be used to evaluate attributes. Consistency or purity-based indices are one alternative.

$$ICI(C, f) = \frac{1}{M_f}\sum_{k=1}^{M_f} \max_i\, p\big(C_i, r_k(f)\big) = \frac{1}{M_f}\sum_{k=1}^{M_f} p\big(r_k(f)\big)\,\max_i\, p\big(C_i \mid r_k(f)\big)$$

For selection of a subset of attributes F = {Xi} the sum runs over all Cartesian products, i.e. multidimensional partitions rk(F).

Advantages:

• the simplest approach;
• suitable for both ranking and selection.

Hashing techniques are used to calculate p(rk(F)) probabilities.
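A minimal sketch of the purity index for a single discretized feature, following the normalization written above (NumPy assumed, toy counts):

```python
import numpy as np

def purity_index(counts):
    """counts[i, k] = number of samples of class C_i falling in bin r_k(f)."""
    p = counts / counts.sum()              # joint p(C_i, r_k(f))
    M_f = counts.shape[1]
    return p.max(axis=0).sum() / M_f       # (1/M_f) * sum_k max_i p(C_i, r_k(f))

counts = np.array([[30.,  5.,  2.],
                   [ 4., 25., 10.]])
print(purity_index(counts))
```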

Page 9:

Correlation coefficient

Perhaps the simplest index is based on Pearson's correlation coefficient (CC), which compares the expectation value of the product of feature values and class values with the product of their expectations:

$$CC(X_j, \omega) = \frac{E(X_j\,\omega) - E(X_j)\,E(\omega)}{\sqrt{\sigma^2(X_j)\,\sigma^2(\omega)}} \in [-1, +1]$$

For feature values that are linearly dependent on the class the correlation coefficient is +1 or −1; for complete independence of the class and Xj distributions CC = 0.

How significant are small correlations? It depends on the number of samples n. The answer (see Numerical Recipes, www.nr.com) is given by:

$$P(X_j, \omega) \sim \operatorname{erf}\left(|CC(X_j, \omega)|\,\sqrt{n/2}\right)$$

For n=1000 even small CC=0.02 gives P ~ 0.5, but for n=10 such CC gives only P ~ 0.05.
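A short sketch that reproduces these numbers and shows the correlation computation itself (NumPy assumed; the feature is synthetic):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 100).astype(float)        # numeric class labels
x = 0.5 * y + rng.normal(size=100)               # feature partly dependent on the class
print("CC =", np.corrcoef(x, y)[0, 1])

# Significance of a small correlation for different sample sizes (the numbers quoted above).
for n, cc in ((1000, 0.02), (10, 0.02)):
    print(f"n={n}: P ~ {erf(abs(cc) * sqrt(n / 2.0)):.2f}")
```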

Page 10:

Other relevance indices

Mutual information is based on the Kullback-Leibler distance; any distance measure between distributions may also be used, e.g. the Jeffreys-Matusita distance:

$$D_{JM}(X_j, \omega) = \sum_{i=1}^{K}\sum_{k=1}^{M_j} \left(\sqrt{P(X_j = x_k,\, \omega_i)} - \sqrt{P(\omega_i)\,P(X_j = x_k)}\right)^2$$

The Bayesian concentration measure is simply:

$$D_{BC}(X_j, \omega) = \sum_{k=1}^{M_j}\sum_{i=1}^{K} P(\omega_i \mid X_j = x_k)^2$$

Many other such dissimilarity measures exist. Which is the best?

In practice they are all similar, although the accuracy of calculating the indices matters; relevance indices should be insensitive to noise and unbiased in their treatment of features with many values; information-theoretic indices are fine.
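A sketch of both indices as reconstructed above, computed from a toy class-by-value table (NumPy assumed):

```python
import numpy as np

counts = np.array([[30.,  5.,  2.],               # rows: classes omega_i
                   [ 4., 25., 10.]])              # columns: values x_k of X_j
p_joint = counts / counts.sum()                   # P(X_j = x_k, omega_i)
p_class = p_joint.sum(axis=1, keepdims=True)      # P(omega_i)
p_value = p_joint.sum(axis=0, keepdims=True)      # P(X_j = x_k)

D_JM = ((np.sqrt(p_joint) - np.sqrt(p_class * p_value)) ** 2).sum()
D_BC = ((p_joint / p_value) ** 2).sum()           # sum_{k,i} P(omega_i | X_j = x_k)^2
print(D_JM, D_BC)
```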

Page 11:

Discretization

All indices of feature relevance require summation over probability distributions. What to do if the feature is continuous?

There are two solutions:

1. Fit some functions to the histogram distributions using Parzen windows, e.g. fit a sum of several Gaussians, and integrate to get the indices, for example:

$$MI(X_j; \omega) = \sum_{i=1}^{K} \int P(X_j = x,\, \omega_i)\, \log_2 \frac{P(X_j = x,\, \omega_i)}{P(\omega_i)\,P(X_j = x)}\, dx$$

2. Discretize: put feature values into bins and calculate sums.

• Histograms with equal width of bins (intervals);
• Histograms with equal number of samples per bin;
• Maxdiff histograms: bins starting in the middle (xi+1 + xi)/2 of the largest gaps;
• V-optimal: the sum of variances within bins should be minimal (good but costly).
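A sketch of three of these binning schemes (equal width, equal frequency, maxdiff); the V-optimal variant needs dynamic programming and is omitted. NumPy assumed, bin counts arbitrary:

```python
import numpy as np

def equal_width_edges(x, n_bins):
    return np.linspace(x.min(), x.max(), n_bins + 1)

def equal_frequency_edges(x, n_bins):
    return np.quantile(x, np.linspace(0, 1, n_bins + 1))

def maxdiff_edges(x, n_bins):
    """Cut in the middle (x_{i+1} + x_i) / 2 of the n_bins - 1 largest gaps."""
    xs = np.sort(x)
    gaps = np.diff(xs)
    cut_idx = np.sort(np.argsort(gaps)[-(n_bins - 1):])
    inner = (xs[cut_idx] + xs[cut_idx + 1]) / 2.0
    return np.concatenate(([xs[0]], inner, [xs[-1]]))

x = np.random.default_rng(0).normal(size=200)
for scheme in (equal_width_edges, equal_frequency_edges, maxdiff_edges):
    print(scheme.__name__, np.round(scheme(x, 4), 2))
```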

Page 12:

Discretized information

With a partition of the Xj feature values x into rk bins, the joint information is calculated as:

$$H(X_j, \omega) = -\sum_{k=1}^{M_j}\sum_{i=1}^{K} P(x \in r_k,\, \omega_i)\,\log_2 P(x \in r_k,\, \omega_i)$$

and the mutual information as:

$$MI(X_j; \omega) = H(\omega) + H(X_j) - H(X_j, \omega) = -\sum_{i=1}^{K} P(\omega_i)\log_2 P(\omega_i) - \sum_{k=1}^{M_j} P(x \in r_k)\log_2 P(x \in r_k) - H(X_j, \omega)$$

Page 13:

Tree (entropy-based) discretization

V-opt histograms are good, but difficult to create (dynamic programming techniques should be used).

Simple approach: use decision trees with a single feature, or a small subset of features, to find good splits – this avoids local discretization.

Example:

C4.5 decision tree discretization maximizing information gain, or the SSV tree based on a separability criterion, vs. constant-width bins.

Hypothyroid screening data, 5 continuous features, MI shown.

Compare the amount of mutual information or the correlation between class labels and discretized values of features using equal width discretization and decision tree discretization into 4, 8 .. 32 bins.
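A sketch of such tree-based discretization using a shallow scikit-learn decision tree with the entropy criterion as a stand-in for C4.5/SSV (which are not available in scikit-learn); the synthetic feature is illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tree_bin_edges(f, c, n_bins=8):
    """Split points chosen to maximize information gain on a single feature."""
    tree = DecisionTreeClassifier(criterion="entropy", max_leaf_nodes=n_bins, random_state=0)
    tree.fit(f.reshape(-1, 1), c)
    thresholds = tree.tree_.threshold[tree.tree_.feature == 0]   # internal-node splits
    return np.sort(thresholds)

rng = np.random.default_rng(0)
c = rng.integers(0, 2, 1000)
f = rng.normal(loc=2.0 * c, scale=1.0)          # feature shifted by class
print(tree_bin_edges(f, c, n_bins=4))
```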

Page 14:

Ranking algorithms

Based on:

• MI(C,f): mutual information (information gain)
• ICR(C|f): information gain ratio
• IC(C|f): information from the maxC posterior distribution
• GD(C,f): transinformation matrix with Mahalanobis distance
• JBC(C,f): Bayesian purity index
• 7 other methods based on IC and correlation-based distances
• Relieff selection methods
• Several new simple indices; estimation errors and convergence
• Markov blankets
• Boosting
• Margins

Page 15:

Selection algorithms

Maximize the evaluation criterion for single features and remove redundant features.

1. The MI(C;f) − MI(f;g) algorithm (a sketch appears after this list): start from the single most relevant feature and greedily add the feature with the largest relevance minus redundancy,

$$S_1 = \{s_1\}, \qquad s_1 = \arg\max_{i \in F} MI(C; X_i)$$

$$S_k = S_{k-1} \cup \{s_k\}, \qquad s_k = \arg\max_{i \in F \setminus S_{k-1}} \Big[ MI(C; X_i) - \sum_{s \in S_{k-1}} MI(X_i; X_s) \Big]$$

2. IC(C,f) − IC(f,g): the same algorithm but with the IC criterion.

3. Max IC(C;F): adding the single attribute that maximizes IC.

4. Max MI(C;F): adding the single attribute that maximizes MI.

5. SSV decision tree based on separability criterion.
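A sketch of algorithm 1 (the greedy MI(C;f) − MI(f;g) scheme), using scikit-learn's mutual information estimators in place of the discretized estimates discussed earlier; data and parameters are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mi_selection(X, y, n_select=4):
    relevance = mutual_info_classif(X, y, random_state=0)        # MI(C; X_i)
    selected = [int(np.argmax(relevance))]                       # start from the best feature
    while len(selected) < n_select:
        best_i, best_score = None, -np.inf
        for i in range(X.shape[1]):
            if i in selected:
                continue
            redundancy = sum(mutual_info_regression(X[:, [s]], X[:, i], random_state=0)[0]
                             for s in selected)                  # sum_s MI(X_i; X_s)
            score = relevance[i] - redundancy
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
    return selected

X, y = make_classification(n_samples=400, n_features=10, n_informative=4, random_state=0)
print(mi_selection(X, y))
```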

Page 16:

4 Gaussians in 8D

Artificial data: set of 4 Gaussians in 8D, 1000 points per Gaussian, each as a separate class.

Dimension 1-4, independent, Gaussians centered at:

(0,0,0,0), (2,1,0.5,0.25), (4,2,1,0.5), (6,3,1.5,0.75). Ranking and overlapping strength are inversely related:

Ranking: X1 > X2 > X3 > X4.

Attributes Xi+4 = 2Xi + uniform noise ±1.5.

Best ranking: X1, X5, X2, X6, X3, X7, X4, X8

Best selection: X1, X2, X3, X4, X5, X6, X7, X8
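A sketch that reconstructs this artificial data set; the unit variance of the Gaussians is an assumption, everything else follows the description above:

```python
import numpy as np

rng = np.random.default_rng(0)
centers = np.array([[0, 0, 0, 0],
                    [2, 1, 0.5, 0.25],
                    [4, 2, 1, 0.5],
                    [6, 3, 1.5, 0.75]])

# 1000 points per Gaussian class in the first 4 dimensions (unit variance assumed)
X4 = np.vstack([rng.normal(loc=c, scale=1.0, size=(1000, 4)) for c in centers])
y = np.repeat(np.arange(4), 1000)

# X_{i+4} = 2 * X_i + uniform noise in [-1.5, 1.5]
X = np.hstack([X4, 2 * X4 + rng.uniform(-1.5, 1.5, size=X4.shape)])
print(X.shape, np.bincount(y))
```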

Page 17:

Dim X1 vs. X2

Page 18:

Dim X1 vs. X5

Page 19:

Ranking for 8D Gaussians

Partitions of each attribute into 4, 8, 16, 24, 32 parts, with equal width.

• Methods that found perfect ranking:

MI(C;f), IGR(C;f), WI(C,f), GD transinformation distance

• IC(f): correct, except for P8, where features 2 and 6 are reversed (6 is the noisy version of 2).

• Other, more sophisticated algorithms, made more errors.

Selection for Gaussian distributions is rather easy using any evaluation measure. Simpler algorithms work better.

Page 20:

Selection for 8D Gaussians

Partitions of each attribute into 4, 8, 16, 24, 32 parts, with equal width. Ideal selection: subsets with {1}, {1+2}, {1+2+3}, or {1+2+3+4} attributes.

1. MI(C;f) − MI(f;g) algorithm: P24 no errors; for P8, P16, P32 a small error (4↔8).

2. Max MI(C;F): P8–P24 no errors, P32 (3,4↔7,8).

3. Max IC(C;F): P24 no errors, P8 (2↔6), P16 (3↔7), P32 (3,4↔7,8).

4. SSV decision tree based on separability criterion: creates its own discretization. Selects 1, 2, 6, 3, 7, others are not important.

Univariate trees have bias for slanted distributions. Selection should take into account the type of classification system that will be used.

Page 21:

Discretization example

MI index for 5 continuous features of the hypothyroid screening data.

EP = equal-width partition into 4, 8 .. 32 bins
SSV = decision tree partition (discretization) into 4, 8 .. 32 bins

Page 22:

Hypothyroid: equal bins

Mutual information for different number of equal width partitions, ordered from largest to smallest, for the hypothyroid data: 6 continuous and 15 binary attributes.

Page 23:

Hypothyroid: SSV bins

Mutual information for different numbers of SSV decision tree partitions, ordered from largest to smallest, for the hypothyroid data. Values are twice as large as for equal-width bins since the bins are more pure.

Page 24:

Hypothyroid: ranking

Best ranking = largest area under the curve of accuracy(best n features).

SBL: evaluating and adding one attribute at a time (costly).

Best 2: SBL, best 3: SSV BFS, best 4: SSV beam; BA - failure

Page 25:

Hypothyroid: ranking

Results from the FSM neurofuzzy system.

Best 2: SBL, best 3: SSV BFS, best 4: SSV beam; BA – failure

Global correlation misses local usefulness ...

Page 26:

Hypothyroid: SSV ranking

More results using FSM and selection based on SSV.

SSV with beam search P24 finds the best small subsets, depending on the search depth; here best results for 5 attributes are achieved.

Page 27:

Leukemia: Bayes rules

Top: test, bottom: train; green = p(C|X) for a Gaussian-smoothed density with σ = 0.01, 0.02, 0.05, 0.20 (Zyxin).

Page 28:

Leukemia boosting

3 best genes, evaluation using bootstrap.

Page 29:

Leukemia boosting

3 best genes, evaluation using bootstrap.

Page 30:

Leukemia SVM LVO

Problems with stability

Page 31:

GM example, Leukemia data

Two types of leukemia, ALL and AML, 7129 genes, 38 train/34 test.

Try SSV – decision trees are usually fast, no preprocessing.

1. Run SSV selecting 3 CVs; train accuracy is 100%, test 91% (3 errors), and only one feature, F4847, is used.

2. Removing it manually and running SSV creates a tree with F2020: 1 train error, 9 test errors. This is not very useful ...

3. The SSV Stump Field shows the ranking of 1-level trees; SSV may also be used for selection, but trees separate the training data with F4847.

4. Standardize all the data, run SVM, 100% with all 38 as support vectors but test is 59% (14 errors).

5. Use SSV for selection of 10 features, run SVM with these features, train 100%, test 97% (1 error only).

Small samples ... but a profile of 10-30 genes should be sufficient.
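A minimal sketch of step 5 on a stand-in data set (the Golub leukemia data is not bundled here); SSV is replaced by a simple univariate filter, since it is not available in scikit-learn, and the selection is done inside cross-validation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# few samples, many features, as in the 38-train / 7129-gene setting
X, y = make_classification(n_samples=72, n_features=2000, n_informative=20, random_state=0)

pipe = make_pipeline(StandardScaler(),
                     SelectKBest(mutual_info_classif, k=10),   # pick ~10 features
                     SVC(kernel="linear"))
print(cross_val_score(pipe, X, y, cv=5).mean())
```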

Page 32:

Conclusions

About 20 ranking and selection methods have been checked.

• The actual feature evaluation index (information, consistency, correlation etc) is not so important.

• Discretization is very important; naive equal-width or equal-distance discretization may give unpredictable results; entropy-based discretization is fine, but separability-based discretization is less expensive.

• Continuous kernel-based approximations to calculation of feature evaluation indices are a useful alternative.

• Ranking is easy if global evaluation is sufficient, but different sets of features may be important for separation of different classes, and some are important in small regions only – cf. decision trees.

• The lack of stability is the main problem: boosting does not help.
• Local selection/ranking with margins and stabilization is the most promising technique.

Page 33:

Open questions

• Discretization or kernel estimation?

• Best discretization: V-opt histograms, entropy, separability?
• Margin-based selection, as in SVM.
• Stabilization using margins for individual vectors.
• Use of feature weighting from ranking/selection to scale the input data.
• Use PCA on groups of redundant features.
• Evaluation indices more sensitive to local information?
• Statistical tests, such as Kolmogorov-Smirnov, to identify redundant features.

Will anything work reliably for microarray feature selection?

Different methods may use the information contained in the selected attributes in different ways; are filters sufficient?