Predicting Post-Operative Gait of Cerebral Palsy Patients
Movement Research Lab., Seoul National University



Predicting Post-Operative Patient Gait

Predicting Post-Operative Gait of Cerebral Palsy Patients
Movement Research Lab.
Seoul National University

Motivation
We want to predict the gait of post-operative patients to complement doctors' experience.

Our goal
Predicting post-operative gait by learning from pre- and post-operative gait data

Motion predictor
Learn a motion predictor from the training data set:
- x: pre-operative gait (input)
- y: post-operative gait (output)

Given new input data, we generate a new motion using the learned predictor.

Regression process
[Figure: training data feeds the regression process to learn a predictor (prediction model); a new pre-operative gait x is passed through the predictor to produce a new post-operative gait]

[Figure: motion-to-motion regression: sequences of pre-operative gait poses map through the regression process to sequences of post-operative gait poses]

Motion to motion

For each patient i, the pre-operative gait Xi (input) and the post-operative gait Yi (output) form a training pair for motion-to-motion regression.

Canonical Correlation Analysis (CCA)
Find pairs of bases that maximize the correlation between two variables in the reduced space.

[Figure: CCA computes basis X and basis Y; variables X and Y are projected onto them, the correlation between reduced X and reduced Y is maximized, and the regression is performed in the reduced space]

With such training data in hand, we consider canonical correlation analysis (CCA) as a good prediction model, because CCA explains the data dependency between input and output well and can therefore minimize the prediction error. CCA finds pairs of bases that maximize the correlation between the two variables x and y in the subspace. When we perform the regression in the reduced space, the fitting errors are minimized because the two variables are highly correlated there. X bar and Y bar represent the reduced variables x and y, respectively. After some substitution, we can define the CCA equation; the basis pairs that solve it are obtained by singular value decomposition.

(Please see our paper for the details of the derivation of this equation.) We perform the regression in the reduced space.

Sparse CCA

Reformulation: CCA-based regression
Linear regression between the pair of reduced data

Reconstruction from subspace to original space

Concatenating these matrices produces the predictor

[Figure: reduced input data → linear regression → reduced motion data → reconstruction → original motion data]

Here I am ready to talk about the kernel CCA-based regression. First, we project the training input and output data onto the bases computed by kernel CCA. We obtain a matrix A that links the reduced input to the reduced pose by linear regression. We also obtain a matrix B that recovers the original pose from the reduced pose. By concatenating these matrices, we determine the predictor.

Motion synthesis

[Figure: pre-operative gait → projection onto the acquired basis → prediction matrix → orientations of all joints → post-operative gait]

Here I want to emphasize how we synthesize realistic motion. We take the pre-operative gait as input, project it onto the acquired basis, and multiply it by the prediction matrix. Finally, we obtain the orientations of all joints of the body, and the final pose is generated by joint mapping.

Result: method comparison

GCD Data normalization

[Figure: design of X & Y: pre-operative GCD/C3D data paired with post-operative GCD/C3D data]

Result: feature graph

Thank You

Q & A

Thank you for listening to my presentation.

Data & Feature
Many data sets have hundreds of variables, with many irrelevant and redundant ones.

Features are the variables obtained by removing redundant or noisy variables from the data.

Advantages of feature selection
- Alleviating the effect of the curse of dimensionality
- Improving a learning algorithm's prediction performance
- Faster and more cost-effective
- Providing a better understanding of the data

L1 regularization
An effective feature selection method

L1 norm: the sum of the absolute values of the components, ||w||_1 = sum_i |w_i|.

L1 regularization drives the solution toward sparsity.

A new post-operative gait can then be estimated as a matrix-vector multiplication, e.g., y = A x.

The predictor A is learned with an L1 sparsity term added to the fitting error.
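A toy illustration of the sparsity effect (synthetic data, not the paper's; scikit-learn's Lasso stands in for the L1-regularized fit): only the truly relevant input features keep non-zero coefficients.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 10))     # 10 candidate joint features
# True model uses only columns 0 and 3 (say, an "ankle" and a "pelvis" column).
y = 0.4 * X[:, 0] + 0.6 * X[:, 3] + 0.01 * rng.standard_normal(200)

# L1 penalty: minimize (1/2n)||y - Xw||^2 + alpha * ||w||_1.
lasso = Lasso(alpha=0.05).fit(X, y)
print(np.round(lasso.coef_, 2))
```

The non-zero entries of the learned row vector point at the input features that matter, which is the interpretability argument made on the slides.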

Result

L1 regularization
With the learned model, we can fully explain the features for each body joint.
- Features can be read as the combination of joint information corresponding to the non-zero terms in a row vector of the learned model.
- e.g., left knee position = 0.4 * left ankle position + 0.6 * pelvis position.

The problem: it cannot explain the nonlinear relationship between training input and output.

Correspondence

[Figure: correspondence between pre-operative patient motion and post-operative patient motion in the regression process]

[Figure: pose-to-pose regression: individual pre-operative gait poses map through the regression process to post-operative gait poses]

Pose to pose

Result: naïve, pose to pose

Minimizing prediction error
Motion to motion
- Considering relations between temporally remote poses
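The motion-to-motion idea, as a data layout, can be sketched as follows (the frame count and joint dimensions are illustrative, not from the paper):

```python
import numpy as np

# A gait cycle of T frames, each with d joint parameters.
T, d = 60, 21                        # e.g. 7 joints x 3 orientation values
gait_cycle = np.zeros((T, d))        # one pre-operative gait cycle

# Pose-to-pose: regress each frame independently (T separate d-dim samples).
pose_samples = gait_cycle            # shape (60, 21)

# Motion-to-motion: flatten the whole cycle into one long vector, so the
# regression can relate temporally remote poses within the same cycle.
motion_vector = gait_cycle.reshape(-1)
print(pose_samples.shape, motion_vector.shape)
```

Flattening trades sample count for context: each patient contributes one long vector instead of T short ones, which is why the dimension-reduction step matters.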

Gait of cerebral palsy patient

Video: http://www.youtube.com/watch?v=q7AokhnifG0

Treatments
What is cerebral palsy?

Orthopedic surgery
- Distal Hamstring Lengthening (DHL)
- Rectus Femoris Transfer (RFT)
- Tendo Achilles Lengthening (TAL)
- Femoral Derotation Osteotomy (FDO)

[Poses of cerebral palsy patients]

Related work
Predicting outcomes of rectus femoris transfer surgery [Reinbolt et al. 2009]: selects a set of preoperative gait features that distinguished between good and poor outcomes

Evaluation of conventional selection criteria for lengthening for individuals with cerebral palsy [Truong et al. 2011]

Related work

[Chai and Hodgins 2005] [Slyper and Hodgins 2008] [Kim et al. 2012] [Seol et al. 2013]


Motion data
Number of patients
- DHL+RFT+TAL: 35
- FDO+DHL+TAL+RFT: 33

Seven joints in total: left foot, right foot, left femur, right femur, pelvis, left knee, right knee

Naïve linear regression
Direct regression analysis between pre- and post-operative gait

Minimize the fitting error to obtain the predictor A: min_A sum_i ||A x_i - y_i||^2.

Problem: large prediction error.

Before I jump into the details of our prediction model, I want to talk about naïve linear regression. The following equation minimizes the fitting errors between training input and output; we then obtain the linear predictor, matrix A. Although this formulation is very simple and easy to implement, the results can contain artifacts because of large estimation errors: the equation does not consider the data dependency between x and y. So the solution is to reduce the dimension of the training examples before entering the regression step.

Result: motion to motion + naïve
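The naïve least-squares predictor just described can be sketched directly (synthetic numbers; a row-vector convention y = x A is used here instead of the slide's y = A x):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 35, 50                      # fewer patients than gait dimensions
X = rng.standard_normal((n, d))    # pre-operative gait vectors (rows)
A_true = rng.standard_normal((d, d))
Y = X @ A_true                     # post-operative gait vectors

# Naive least squares: fit A to minimize ||X A - Y||^2 over the training pairs.
A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

train_err = np.linalg.norm(X @ A_hat - Y)
x_new = rng.standard_normal(d)
test_err = np.linalg.norm(x_new @ A_hat - x_new @ A_true)
print(train_err, test_err)
```

With 35 examples in 50 dimensions the fit is under-determined: the training error is essentially zero while the error on a new patient is large, which is exactly the motivation for dimension reduction.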

Minimizing prediction error
Dimension reduction

Fully explain the nonlinear relationship between training input and output
- Nonlinear dimension reduction method

PCA: a maximum-variance projection method.

[Figure: PCA projects variables X and Y separately; the data dependency between reduced X and reduced Y is not guaranteed]

When we talk about dimension reduction, there are many popular statistical methods, and PCA is perhaps the most familiar choice. However, PCA might not be a good choice for regression purposes. Here we have two variables, x and y, which are the training input and output, respectively. When we apply PCA to each variable, we obtain reduced data by projecting onto its basis, and we can represent the relationship of reduced x and y in 2D space. However, there is no guarantee about the data dependency of the two variables in the reduced space, so the prediction errors cannot be minimized.

(So we now consider canonical correlation analysis, CCA. CCA finds two different bases, one for each variable, such that the correlation between the two data sets is maximized in the reduced space; this reduces the estimation errors associated with regression. In terms of approximating the original data, PCA is a good choice: it finds bases that maximize the variance of the data set.)

Kernel CCA
CCA may not fully explain the non-linear relationship between pre-operative and post-operative motion.
Non-linear CCA using the kernel trick method: transform the data into a high-dimensional space

Substitute non-linear mapping into CCA

However, one limitation of linear CCA is that it may not fully explain the non-linear relationship between input and output. So our key idea is to map the training data into a high-dimensional feature space: by applying a function phi to the training data, we transform it into the feature space, and we can reformulate CCA into kernel CCA using the kernel trick method.

Future work
Design training input & output with respect to the clinical context.

Feature selection
- Alleviating the effect of the curse of dimensionality
- Improving prediction performance
- Faster and more cost-effective
- Providing a better understanding of the data
