
第40卷 第3期 Vol.40 No.3
山东大学学报(工学版) JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE)
2010年6月 Jun. 2010

Received date: 2009-12-28
Foundation item: This research was supported by NSF, USA
Biography: SHENG Weihua (1972- ), male, Ph.D., assistant professor; his research interests include human-robot interaction, wearable computing, and mobile sensor networks. Email: weihua.sheng@okstate.edu
ZHU Chun (1983- ), female, Ph.D. student; her research interests include human behavior recognition and human-robot interaction. Email: chunz@okstate.edu

文章编号: 1672-3961(2010)03-0037-14

A wearable computing approach for hand gesture and daily activity recognition in human-robot interaction

SHENG Weihua, ZHU Chun (School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA)

Abstract: Human-robot interaction (HRI) is an important topic in robotics, especially in assistive robotics. In this paper, we addressed the HRI problem in a smart assisted living (SAIL) system for elderly people, patients, and the disabled. Two problems that are very important for developing natural HRI were solved: hand gesture recognition and daily activity recognition. For the problem of hand gesture recognition, an inertial sensor is worn on a finger of the human subject to collect hand motion data. A neural network is used for gesture spotting, and a two-layer hierarchical hidden Markov model (HHMM) is applied to integrate the context information in the gesture recognition. For the problem of daily activity recognition, two inertial sensors are attached to one foot and the waist of the subject. A multi-sensor fusion scheme was developed for recognition. First, data from these two sensors are fused for coarse-grained classification. Second, fine-grained classification modules based on heuristic discrimination or hidden Markov models (HMMs) are applied to further distinguish the activities. Experiments were conducted using a prototype wearable sensor system, and the obtained results proved the effectiveness and accuracy of our algorithms.
Keywords: human-robot interaction; hidden Markov model; neural networks

人机交互中基于可穿戴式计算的

手势和活动辨识

盛卫华,祝纯(俄克拉荷马州立大学电气与计算机学院,美国 俄克拉荷马州 止水市 74078OK)

摘要:人与机器人交互是机器人技术领域、尤其是生活辅助机器人领域的重要课题。本文以辅助老年人、病人和

残疾人为应用背景,提出了“智能辅助生活系统”(SAILSystem),并解决了该系统中人的手势识别和日常动作识别两个重要问题。对于手势识别问题,本文采用一个惯性传感器来采集被试验人手指部位活动的信号,运用人工

神经网络进行手势捕捉,并应用一个分层隐马尔可夫模型结合前后手势的关联信息,来提高手势识别的准确率。

对于动作识别问题,数据来源于位于被试验人一侧的脚面和腰部的两个惯性传感器,并采用多传感器融合方法识

别各种日常动作。在对两个传感器的数据进行融合的粗分类之后,细分类应用了隐马尔可夫模型和启发式方法

来进一步识别各个动作类型。该穿戴式传感器系统经过实验测试,结果证明了本识别算法的有效性和精确性。

关键词:人与机器人交互;隐马尔可夫模型;神经网络

中图分类号:TP391.4    文献标志码:A


1 Introduction

1.1 Motivation
The past decade has seen a steady growth of the elderly population. The baby boomers comprise nearly 20 percent of the U.S. population, which is equal to 76.1 million Americans [1]. In 2010 many of them will turn 65 and are prone to health complications. This may cause an increased burden on the medical industry. Compared to the rest of the population, more seniors live alone as the sole occupants of a private dwelling than any other population group. Therefore, elderly people living alone are an at-risk group. Helping them to live a better life is very important and has great societal benefits.
Many researchers are working on new technologies such as assistive robots to help elderly people. Haigh et al. [2] provided a survey on assistive robots used as caregivers. The mainstream of assistive robotics research focuses on manipulating assistance devices such as grippers to help people eat, electronic travel aids to guide people to walk, and intelligent wheelchairs to move people around. In recent years, several researchers have envisioned a companion robot that lives with people like a pet. For example, Haasch et al. [3] developed the Bielefeld Robot Companion, which communicates with non-expert users in a natural and intuitive way. Fritsch et al. presented SIRCLE [4], a system infrastructure providing a software platform for a robot companion which exhibits powerful capabilities in human-robot interaction (HRI) [5].
We are developing a smart assisted living (SAIL) system [6-7] to provide support to elderly people in their houses or apartments. As illustrated in Figure 1, the SAIL system consists of a body sensor network (BSN) [8], a companion robot, a smart phone, and a remote health provider. The body sensor network collects motion data and vital signs of the human subject and sends them wirelessly (for example, through ZigBee [9]) to the companion robot, which infers the human intentions and conditions from these data and responds correspondingly. The smart phone serves as a gateway to access the expertise of remote health care providers, if needed. For example, when there is a detected medical emergency or mishap, such as falling down on the floor, the remote health provider can control the companion robot to observe and help the human subject through a web-based interface and a joystick.

Fig.1 The overview of the smart assisted living (SAIL) system

The body sensor network consists of wearable sensor nodes attached to the chest, one of the ankles, the waist, and one of the fingers of the human subject, respectively. Such a minimal set of sensor nodes reduces the obtrusiveness to the minimum. Each node has a miniature microcontroller, a ZigBee communication module, as well as an inertial sensor and the associated signal conditioning circuits. The inertial sensor consists of a 3-axis accelerometer, a 3-axis gyro, and a compass. Additionally, in order to collect the vital signs of the human subject, the chest node has a microphone and a temperature sensor, while the finger node has a blood pressure sensor and a pulse oximeter.
Natural human-robot interaction is a very important issue in the design of assistive robotics, especially for elderly people, who usually suffer from problems with speech [10] or have difficulty in learning new computer skills [11]. It is therefore desirable to make the robot able to not only understand explicit human intentions from gestures, but also recognize the human daily activities, from which implicit human intentions may be inferred. Such a robot capability is called considerate intelligence [6-7]. In this paper, we

focus on solving two problems central to natural HRI: hand gesture recognition and human daily activity recognition. Compared to the existing work, we made two main contributions: (i) we developed a lightweight and resource-aware hand gesture recognition algorithm that considers the context information represented by the sequential constraints between different commands; (ii) we developed a multi-sensor fusion scheme for accurate daily activity recognition.
This paper is organized as follows. The rest of this section introduces some related work in hand gesture recognition and human daily activity recognition. Section 2 develops the algorithm for hand gesture recognition. Section 3 describes the algorithm for human daily activity recognition. The experimental tests and results are presented in Section 4. Conclusions are given in Section 5.
1.2 Related work
Researchers have made significant progress in the area of human-robot interaction in recent years. A comprehensive survey of this area is provided by Yanco et al. [5,12]. They categorized the existing HRI research based on criteria such as autonomy, intervention, human-robot ratio, and interaction. As hand gesture recognition and human daily activity recognition are essential to natural HRI, we review some related work in both areas.
1.2.1 Hand gesture recognition
Traditional gesture recognition is based on visual information. A typical approach for vision-based gesture recognition has two steps: first, feature extraction using color detection, edge detection, background removal techniques, etc.; second, pattern recognition using machine learning algorithms, such as hidden Markov models (HMMs) [13] and neural networks [14]. More work in this area can be found in [15].
Recently, due to the advancement in MEMS and VLSI technologies, wearable-sensor-based gesture recognition has been gaining attention. Compared to vision-based gesture recognition, wearable-sensor-based recognition has two advantages. First, for vision-based gesture recognition, cameras need to be installed prior to the experiments, and environmental conditions (brightness, contrast, obstacles, etc.) have significant impacts on the image data. On the

contrary, wearable sensors are not affected by their surroundings. Second, wearable-sensor-based gesture recognition requires less data compared to vision-based recognition. Typical wearable sensors include inertial sensors and glove sensors [16-17]. Other wearable sensors such as microphones, barometers, and thermometers can provide complementary information in wearable sensor systems [18].
There is some existing work on hand gesture recognition from video data sources. However, there is not much work on recognition using wearable sensors as a data source. An important problem in gesture recognition is to segment gestures from non-gesture movements, which is called the gesture spotting problem [19]. There are two main methods: rule-based methods and HMM-based methods. Rule-based methods are widely used in vision-based recognition. Some researchers use a special position to mark the start or end point of a gesture [20], while others define rules for the behavior before or after a gesture [21], such as staying still for several seconds. Ramamoorthy et al. [20] implemented a method that moved the hand in and out of the sight of a camera to represent the start and end point of a gesture. Lenman et al. [21]

defined gestures which consist of a start pose, a trajectory, and a selection pose. HMM-based methods maximize the likelihood in time-series signals using different hidden Markov models that represent different classes of data [22-23]. Lee et al. [22] introduced a threshold model that calculates the likelihood threshold of an input pattern and provides a confirmation mechanism for the provisionally matched gesture patterns. Overall, the rule-based methods are easy to implement but are not convenient for elderly people to use. The HMM-based methods do not place such requirements on the human subject; however, the computational cost is high due to the use of HMMs.
1.2.2 Human daily activity recognition
Many solutions have been developed for human daily activity recognition over the years, including heuristic analysis methods [24-25], discriminative methods [26-27], generative methods [13], and some combinations of these methods [28].
Heuristic analysis methods are based on the direct characteristic analysis and the description of the data

from sensors. For example, Aminian et al. [24] developed an algorithm based on the analysis of the average and the deviation of the acceleration signal to classify the activities into four categories: lying, sitting, standing, and locomotion. Discriminative methods analyze features extracted from sensor data segments without considering sequential connections in the data. For example, in [29], principal component analysis (PCA) [30] and independent component analysis (ICA) [31] are used in the feature generation process with a wavelet transform of the sensor data. Generative methods use generative models for the probability-based observations with hidden parameters; they specify a joint probability distribution over observation and label sequences. For example, DeVaul et al. [32] developed a two-layer model that combines a multi-component Gaussian mixture model [33]

with Markov models to accurately classify a range of user activity states, including sitting, walking, and biking. By combining different methods, the advantages of each method can be better utilized to solve complicated problems. Lester et al. [28] presented a hybrid approach to recognize human daily activities, which combines boosting [34] and HMMs. Boosting is used to discriminatively select useful features, and the HMM is used to recognize different activities.
To summarize, heuristic analysis methods require intuitive analysis of the raw sensor data or the features derived from the data, and the characteristics may differ from individual to individual. Therefore, it is difficult to find a ubiquitous way for observation. On the contrary, since discriminative methods and generative methods

are machine learning algorithms, the parameters can be trained using data from different individuals. However, their disadvantage is the high computational cost. The combination of different methods can achieve better performance than any single method.

2 Hand gesture recognition

In our SAIL system, different hand movement patterns are used to command the companion robot, much like the way people command a dog. Five basic hand gestures are assigned to five commands which mean "come", "go fetching", "go away", "sit down", and "stand up", respectively. In this section, we discuss our algorithm for hand gesture recognition, which combines neural-network-based gesture spotting and hierarchical hidden Markov model (HHMM) based gesture classification.
Since most embedded computing systems have limited batteries and computation power, it is important to design recognition algorithms that are resource-aware and lightweight. As shown in Figure 2, the recognition algorithm consists of two modules: (1) the segmentation module, which uses a neural network to realize gesture spotting, and (2) the recognition module, which uses an HHMM to classify gestures. Since the HHMM is a probabilistic model with high computational cost, the NN-based segmentation module is used as a switch to control the data flow in order to save computation time and increase efficiency.

Fig.2 The flowchart of the hand gesture recognition algorithm
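To make the switch-like data flow of Figure 2 concrete, the following minimal sketch (written in Python for illustration; the paper's prototype used Visual C++ and MATLAB) shows how a cheap spotting gate keeps the expensive HHMM stage idle until a gesture segment has been collected. The callables nn_predict and hhmm_classify, and the 0.5 decision threshold, are assumptions; the duration-threshold debouncing described later in this section is omitted.

```python
import numpy as np

def spot_gesture(window_features, nn_predict):
    """Cheap gate: a trained spotting network labels a feature window
    as gesture (1) or non-gesture (0)."""
    return int(nn_predict(window_features) >= 0.5)

def recognize_stream(feature_windows, nn_predict, hhmm_classify):
    """Run the expensive HHMM classifier only on spotted gesture segments."""
    decisions, segment = [], []
    for w in feature_windows:
        if spot_gesture(w, nn_predict):
            segment.append(w)          # accumulate windows inside a gesture
        elif segment:
            decisions.append(hhmm_classify(np.asarray(segment)))
            segment = []               # gesture ended: classify once, reset
    if segment:                        # flush a gesture that ends the stream
        decisions.append(hhmm_classify(np.asarray(segment)))
    return decisions
```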

A neural network is applied in the segmentation module to discriminate gestures from non-gesture movements. We find that simply using a single threshold on the sensor data cannot classify gestures and non-gesture movements accurately. On the contrary, the neural network is a combination of multiple thresholds for different features. Through the training of the neural network, the weights and biases can be

optimized for classification. Furthermore, the neural network is a machine learning algorithm, which can obtain hidden information from the training data and make a good combination of features to perform the classification of gestures and non-gesture movements.
In our experiments, the raw sensor data are sampled at 150 Hz, and a window of 20 points (133 ms) is applied to extract feature vectors, which are fed into the neural network to distinguish gestures from non-gesture movements. Then, a heuristic threshold on the time duration of the same output of the neural network is used in the segmentation module to detect the start or end point of the gesture. The output of the segmentation module triggers the HHMM-based recognition module when a gesture is spotted.
2.1 Gesture spotting using a neural network
We implemented a three-layer feedforward neural network [14] to distinguish gestures from daily non-gesture movements. The input is a feature vector extracted from the raw sensor data. In our current implementation, the 3D angular velocity $[\omega_x, \omega_y, \omega_z]^T$ and the 3D acceleration $[a_x, a_y, a_z]^T$ are recorded as the raw sensor data. We use the following features:
· the 6D mean $[\bar{\omega}_x, \bar{\omega}_y, \bar{\omega}_z, \bar{a}_x, \bar{a}_y, \bar{a}_z]^T$,
· the 6D variance $[\sigma^2_{\omega_x}, \sigma^2_{\omega_y}, \sigma^2_{\omega_z}, \sigma^2_{a_x}, \sigma^2_{a_y}, \sigma^2_{a_z}]^T$.
The output of the neural network is binary (1 or 0), which stands for gestures or non-gesture movements, respectively. The transfer functions of the first and the second layers are log-sigmoid functions, and the third layer has the hard-limit function [14]. The first and the second layers form a 2-layer feedforward network, and their optimized parameters are obtained through training. In the output layer, the weights and biases are fixed to generate discrete outputs.
Supervised learning [14] is used to train the neural network from the labeled training data. In order to avoid the training being trapped in a local minimum, we run the training several times to achieve a smaller mean square error. The number of neurons in each layer is carefully selected for better accuracy and to avoid overfitting.
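As a concrete illustration of the feature vector just described, the sketch below (Python/NumPy, not the authors' MATLAB implementation) computes the 6D mean and 6D variance over non-overlapping 20-sample windows of the 150 Hz gyro and accelerometer stream; the (N, 6) array layout is an assumption.

```python
import numpy as np

FS = 150          # sampling rate in Hz, as stated in the paper
WIN = 20          # window length in samples (about 133 ms)

def window_features(raw):
    """raw: (N, 6) array of [wx, wy, wz, ax, ay, az] samples.
    Returns an (M, 12) array of [6D mean, 6D variance] feature vectors,
    one per non-overlapping 20-sample window."""
    raw = np.asarray(raw, dtype=float)
    n_win = raw.shape[0] // WIN
    feats = []
    for k in range(n_win):
        w = raw[k * WIN:(k + 1) * WIN]
        feats.append(np.concatenate([w.mean(axis=0), w.var(axis=0)]))
    return np.vstack(feats)

# Example: 2 seconds of synthetic data -> 15 feature vectors of length 12
features = window_features(np.random.randn(2 * FS, 6))
print(features.shape)   # (15, 12)
```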

In our current implementation, we assume that non-gesture movements are slow, because when people read, write, walk, and eat, their hands do not exhibit intensive motions. For unexpected and rapid non-gesture movements, a threshold-based HMM likelihood discriminant [22] could be used in the future to decide whether a movement is a gesture or not.
2.2 HHMM-based recognition algorithm
In this section, we will first introduce the basic concepts of HMMs, and then describe the HHMM-based hand gesture recognition method that considers the sequential constraints in hand gestures.
People usually demonstrate specific patterns when they interact with their pets. Such patterns reflect the sequential constraints in the gestures, which can be used to improve the gesture recognition accuracy. In this paper, the hierarchical hidden Markov model (HHMM) technique is implemented in order to increase the recognition accuracy. The HHMM is a statistical model derived from the hidden Markov model. We recognize gestures in two steps: first, the HMMs at the lower level are used to recognize individual hand gestures; second, the constraints among the gestures are modeled with the upper-level HMM, and the most likely state sequence in the upper-level HMM is estimated to correct classification errors made at the lower level.
Hidden Markov models are statistical models for sequential data recognition. They have been widely used in speech recognition, handwriting recognition, and pattern recognition [13]. An HMM is characterized by a set of parameters $\lambda = (A, B, \pi)$, where $A$, $B$, and $\pi$ are the state transition probability distribution, the observation symbol probability distributions in each state, and the initial state distribution, respectively. The forward-backward procedure [35-36] is used to estimate the likelihood $P(O \mid \lambda)$ of a sequence of observations given a specific HMM. The Viterbi algorithm [37] is used to find the single best state sequence $Q$ for a given observation sequence $O$ in the testing mode. The EM (expectation-maximization) method [38] is used to train the parameters of the HMM.
2.2.1 HMM-based individual hand gesture recognition
We preprocess the raw sensor data to extract the features for gesture classification in the lower-level HMM, which has two phases: the training phase and

the recognition phase. Each raw sensor sample is a 6-component vector
$u = [\omega_x, \omega_y, \omega_z, a_x, a_y, a_z]^T$.
A low-pass filter is used to remove high-frequency noise. Then, a sliding window of 20 points of the 3-axis acceleration (about 133 ms in the time domain) is used to calculate the time average in order to remove the DC components and generate the deviation vector $[d_x, d_y, d_z]^T$. We apply the FFT to this vector to analyze the power components in the frequency domain and find the fundamental frequency of the gesture. There are four steps in the training phase.
Step 1: Find the stroke duration. In the training phase, the human subject needs to repeat the same gesture several times to get the matrices for one set of HMM parameters. In order to find the stroke duration of the gesture, the FFT is applied to the deviation vector $[d_x, d_y, d_z]^T$. The frequency with the maximum power among the x, y, and z axes is chosen as the frequency of the gesture, from which we can get the stroke duration of this gesture for further use.
Step 2: Quantize the vectors into observation symbols. K-means clustering is applied to the 6D vectors $u$ to get the partition value for each vector and also a set of centroids for clustering the data into observation symbols in the recognition phase.
Step 3: Set up the initial HMM parameters. Set the number of states in the model, the number of distinct observation symbols per state, and the initial value of $\lambda = (A, B, \pi)$ for iteration.
Step 4: Iterate for EM. The E (expectation) step is the calculation of the auxiliary function $Q(\lambda, \bar{\lambda})$ [13], and the M (maximization) step is the maximization of the likelihood over $\bar{\lambda}$. This process is iterated until the likelihood approaches a steady value.
Figure 3 shows the flowchart for individual hand gesture recognition. The data preprocessing is applied to the data window, and the trained centroids are used to quantize the vectors into observable symbols. A sliding window of 1 second moves along the data sequence, and the likelihood under each set of HMM parameters is estimated. We choose the model which achieves the maximum likelihood as the recognized type. Thus, this HMM-based recognition gives a series of decisions for the segmented gesture.

Fig.3 The flowchart of the HMM-based individual hand gesture recognition
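Step 1 above can be sketched as follows (Python/NumPy, not the authors' code): the FFT of the DC-removed deviation signal is taken per axis, the axis and bin with the largest non-DC power give the fundamental frequency, and its reciprocal gives the stroke duration. The length of the analyzed deviation segment is an assumption.

```python
import numpy as np

FS = 150  # sampling rate in Hz

def stroke_duration(deviation):
    """deviation: (N, 3) DC-removed acceleration [dx, dy, dz].
    Returns the estimated stroke duration in seconds."""
    d = np.asarray(deviation, dtype=float)
    spectrum = np.abs(np.fft.rfft(d, axis=0)) ** 2   # power per axis and bin
    freqs = np.fft.rfftfreq(d.shape[0], d=1.0 / FS)
    spectrum[0, :] = 0.0                             # ignore any residual DC
    axis = np.argmax(spectrum.max(axis=0))           # axis with the strongest peak
    f0 = freqs[np.argmax(spectrum[:, axis])]         # fundamental frequency
    return 1.0 / f0 if f0 > 0 else float('inf')

# Example: a 2 Hz oscillation on the x axis -> duration close to 0.5 s
t = np.arange(3 * FS) / FS
dev = np.stack([np.sin(2 * np.pi * 2.0 * t),
                0.1 * np.random.randn(t.size),
                0.1 * np.random.randn(t.size)], axis=1)
print(round(stroke_duration(dev), 2))
```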

Next, majority voting is applied to the output of the lower-level HMMs for the segmented gesture to produce the decision, which is also the observation symbol value in the upper-level HMM. As shown in Figure 4, the sliding window has a length of 150 data points (one second) and moves by a step of 20 data points. For each sliding window, the model with the maximum likelihood is the result. Therefore, in one gesture segment, majority voting is applied to the results of all the windows to produce a gesture recognition decision.

Fig.4 The moving of sliding windows in one segment of a gesture
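A minimal sketch of the sliding-window decision process just described (Python/NumPy; the authors implemented it in MATLAB): each one-second window of quantized observations is scored by the forward algorithm under every gesture HMM, the per-window winners are collected, and a majority vote gives the single decision for the spotted segment. The helper names and array layout are assumptions.

```python
import numpy as np
from collections import Counter

def quantize(vectors, centroids):
    """Map each 6D sample to the index of its nearest K-means centroid."""
    d = np.linalg.norm(vectors[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def log_forward(obs, log_A, log_B, log_pi):
    """Log-likelihood log P(O|lambda) of a discrete observation sequence
    under one HMM, computed with the forward algorithm in log space."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

def classify_segment(samples, centroids, models, win=150, step=20):
    """samples: (N, 6) raw vectors of one spotted gesture segment.
    models: list of (log_A, log_B, log_pi), one per gesture type.
    Returns the majority-voted gesture index (or None if the segment is short)."""
    obs = quantize(samples, centroids)
    votes = []
    for start in range(0, len(obs) - win + 1, step):
        w = obs[start:start + win]
        scores = [log_forward(w, *m) for m in models]
        votes.append(int(np.argmax(scores)))
    return Counter(votes).most_common(1)[0][0] if votes else None
```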

2.2.2 Context-based hand gesture recognition
In the previous part, individual hand gestures are recognized without knowledge of the context. In this section, we use an HHMM to consider the sequential constraints among the gestures. The HHMM is a generalization of the segment model in which each segment has sub-segments. Figure 5 illustrates the basic idea of an HHMM. A time series is hierarchically divided into segments, where $S^1_i$ represents a state at the upper-level HMM and $S^2_i$ represents a state at the lower-level HMM. A block of $S^2_i$ is the state sequence of the sub-HMMs of $S^1_i$.


Fig.5 The architecture of an HHMM

We define "context" as the sequential constraints among different types of gestures. Figure 6 shows the transitions of the upper-level HMM. It is a discrete, first-order HMM with five states and five observation symbols. The upper-level HMM can be described as a sequence of commands, and at any time it is in one of a set of $N$ ($N = 5$) distinct states: $S_1, S_2, \ldots, S_5$. It undergoes a change of state according to a set of probabilities associated with the state. For example, the same command is less likely to be sent twice consecutively, and when the previous command is "go away", the next one has a small probability of being "go fetching". We denote the time instants associated with the state changes as $k = 1, 2, \ldots, N$ and the $k$-th actual state as $q_k$. The following probabilistic description links the current and the preceding states [6]:
$$a_{ij} = P[q_k = S_j \mid q_{k-1} = S_i], \quad 1 \le i, j \le N, \qquad \sum_j a_{ij} = 1,$$
where $N$ is the number of distinct states.

Fig.6 The transitions of the upper-level HMM that considers the context information

The initial state distribution represents the probability distribution of the first command, which is defined as $\pi_i = P[q_1 = S_i]$, $i = 1, 2, \ldots, N$. Another element of the upper-level HMM is the observation symbol probability distribution in state $S_j$: $b_j(k) = P[O_k \mid q_t = S_j]$, which shows how likely this command is to be recognized as the different observation symbols, where $O_k$ represents the decision made by the lower-level HMM.
For a given observation sequence with a length of $T$, the Viterbi algorithm is used at the upper level to find the single best state sequence $Q = \{q_1 q_2 \ldots q_T\}$, which represents the most likely underlying command sequence for the given observation sequence $O = \{O_1 O_2 \ldots O_T\}$. In this way, some errors made by the lower-level HMMs can be corrected by the upper-level HMM.
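The upper-level correction is a standard Viterbi decode over the sequence of lower-level decisions. A minimal sketch follows (Python/NumPy, not the authors' code); A, B, and pi are the upper-level parameters described above, and the small flooring constant is an assumption to avoid log(0).

```python
import numpy as np

def viterbi(obs, A, B, pi, eps=1e-12):
    """Most likely hidden command sequence given the lower-level decisions.
    A: (N, N) transition matrix, B: (N, N) observation probabilities,
    pi: (N,) initial distribution, obs: list of decision indices 0..N-1."""
    logA = np.log(np.asarray(A) + eps)
    logB = np.log(np.asarray(B) + eps)
    logpi = np.log(np.asarray(pi) + eps)
    T, N = len(obs), len(pi)
    delta = logpi + logB[:, obs[0]]
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + logA          # cand[i, j] = delta_i + log a_ij
        psi[t] = cand.argmax(axis=0)          # best predecessor for each state j
        delta = cand.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]              # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

Calling viterbi with the lower-level decision sequence and the A and B matrices reported in Section 4.1.3 would yield the corrected command sequence.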

3 Human daily activity recognition

In this section, we discuss human daily activity recognition through multi-sensor fusion. Two inertial sensors are attached to one foot and the waist of the human subject, respectively. There are two steps in the daily activity recognition. In the first step, the fusion of the data from the two wearable sensors generates a coarse-grained classification into three types of human activities: zero-displacement activities, transitional activities, and strong-displacement activities. In the second step, either a heuristic discrimination module is used for fine-grained classification of zero-displacement activities and transitional activities, or an HMM-based recognition algorithm is used for fine-grained classification of strong-displacement activities. In this way, the coarse-grained classification controls the direction of the data flow to trigger either the heuristic discrimination module or the HMM-based recognition module. This mechanism can save computation time and enhance the efficiency of the recognition algorithm.
As shown in Figure 7, raw sensor data (acceleration and angular velocity) are processed to obtain the features (mean, variance, and covariance of the 3D angular velocity and 3D acceleration), which are fed into the neural networks NNf and NNw for the foot and the waist, respectively. The multi-sensor-fusion-based coarse-grained classification module determines the next step, the heuristic discrimination module or the HMM module, to be applied in the fine-grained classification module.
3.1 Coarse-grained classification
The following activities are considered for the output of the sensor fusion: (1) AZ = zero-displacement activities: standing, sitting, and sleeping; (2) AT =

transitional activities: sitting-to-standing, standing-to-sitting, level-walking-to-stair-walking, stair-walking-to-level-walking, lying-to-sitting, and sitting-to-lying; (3) AS = strong-displacement activities: walking level, walking upstairs, walking downstairs, and running. More activities can be recognized with additional sensors; for example, cooking and watching TV can be recognized when the environmental audio information is recorded. Two neural networks, NNf and NNw, are designed for the data from the foot and the waist, respectively. The neural networks categorize the data into three types: (1) stationary, (2) transitional, and (3) cyclic. The outputs of the neural networks are fed into the fusion module.

Fig.7 The overview of the human daily activity recognition algorithm

The fusion module integrates the individual types of foot and waist activities and categorizes the human activities according to the rules in Table 1: (1) zero-displacement activities: A ∈ AZ iff Aw = stationary; (2) transitional activities: A ∈ AT iff (Af = transitional and Aw = transitional) or (Af = stationary and Aw = transitional); (3) strong-displacement activities: A ∈ AS iff Af = cyclic and Aw = cyclic. All other combinations of foot and waist activities are considered rare activities and we do not consider them in this paper.

Table 1 Sensor fusion rules

                                 Foot sensor Af
Waist sensor Aw      Stationary     Transitional     Cyclic
Stationary           AZ             AZ               AZ
Transitional         AT             AT               —
Cyclic               —              —                AS
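The fusion rules of Table 1 reduce to a small lookup. A minimal sketch (Python) follows, under the assumption that the per-sensor neural network outputs are encoded as the strings used below:

```python
def fuse(foot, waist):
    """Coarse-grained class from the per-sensor NN outputs.
    foot, waist: one of 'stationary', 'transitional', 'cyclic'.
    Returns 'AZ', 'AT', 'AS', or None for the rare combinations
    that the paper does not consider."""
    if waist == 'stationary':
        return 'AZ'                       # zero-displacement activities
    if waist == 'transitional' and foot in ('stationary', 'transitional'):
        return 'AT'                       # transitional activities
    if waist == 'cyclic' and foot == 'cyclic':
        return 'AS'                       # strong-displacement activities
    return None

print(fuse('cyclic', 'cyclic'))           # AS
print(fuse('stationary', 'transitional')) # AT
```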

3.2 Fine-grained classification
To further distinguish the stationary activities (such as sitting and standing) and the transitional activities (such as sitting-to-standing and standing-to-sitting), a heuristic discrimination module is applied that considers the previous stationary activity and decides the type of the current transitional activity. For example, when the detected previous activity is sitting, the activity following a transitional activity is stationary. We then use a discriminative test on whether the direction of the trunk is vertical or horizontal: if it is vertical, the current activity is standing and the previous transitional activity is sitting-to-standing; otherwise, the current activity is lying and the previous transitional activity is sitting-to-lying.
An HMM-based recognition algorithm is applied to further determine the types of the strong-displacement activities; it recognizes the patterns in the continuous time series of data. The detailed algorithm is similar to the one used in hand gesture recognition.
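The heuristic discrimination step can be sketched as follows (Python/NumPy). The gravity-based trunk-orientation test, the choice of the sensor z axis as the upright axis, and the 45-degree threshold are illustrative assumptions rather than the paper's exact rule:

```python
import numpy as np

def trunk_is_vertical(waist_accel, threshold_deg=45.0):
    """Decide trunk orientation from the mean waist acceleration while the
    subject is stationary: if gravity lies mostly along the sensor's assumed
    vertical (z) axis, the trunk is upright."""
    g = np.asarray(waist_accel, dtype=float).mean(axis=0)
    angle = np.degrees(np.arccos(abs(g[2]) / (np.linalg.norm(g) + 1e-9)))
    return angle < threshold_deg

def resolve_transition(previous_activity, waist_accel):
    """Label the transitional activity that just ended, given the previous
    stationary activity and the new stationary posture."""
    if previous_activity == 'sitting':
        if trunk_is_vertical(waist_accel):
            return 'standing', 'sitting-to-standing'
        return 'lying', 'sitting-to-lying'
    return None, None   # other cases are handled analogously in the full system
```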

4 Experimental results

In both experiments, for hand gesture and activity recognition, the NN and HMMs are trained offline before they are used in the recognition phase. The offline computation time for one human subject is about 10 seconds for the neural network and 60 seconds for the lower-level HMMs on a computation server with an Intel Core 2 CPU at 2.13 GHz and 3 GB of memory. We experience no decision delays during the testing phase after all the models are trained. Here we show the results for hand gesture recognition and human activity recognition, respectively.
4.1 Hand gesture recognition
In this section, the experiment setup and process for hand gesture recognition are introduced and the results are described.
4.1.1 Experiment setup and process
For hand gesture recognition, we use an nIMU inertial sensor from MEMSense LLC [39], which provides 3D acceleration, angular velocity, magnetic data, and temperature data at a sampling rate of 150 Hz. The prototype of the wearable sensor system for hand gesture recognition is shown in Figure 8. The nIMU sensor is connected to a PDA through an RS422/RS232 serial converter, and the PDA sends the data

to a desktop computer through WiFi. The data collection program for the PDA is written in Visual C++ and the recognition algorithm is written in MATLAB. In the experiments, we define the following five gestures, as shown in Figure 9:
Type 1: waving the hand backward for "come here";
Type 2: waving left and right for "go away";
Type 3: pointing forward for "go fetching";
Type 4: turning clockwise for "sit down"; and
Type 5: turning counterclockwise for "stand up".

Fig.8 The prototype of the wearable sensor system for hand gesture recognition

Fig.9 The hand gestures for the five commands

We have 3 experimenters and have recorded 30 sets of data for training and 30 sets of testing sequences, each of which is a sequence consisting of 20 gestures. In the experiments, we followed three steps.
Step 1: Repeatedly perform gesture type 1 for 15 times and take a 5-second break. Continue performing the remaining types following the same pattern until type 5 is done. Label each gesture and record the data in a file.
Step 2: Perform a sequence of 20 gestures with a break of at least 3 seconds between gestures. The gestures mimic a real-world scenario of interacting with a robot.
Step 3: Process the training data and test data. First, train the neural network to distinguish gestures from daily non-gesture movements. Second, use each block of training data to train the lower-level HMMs.

To trade off computational complexity against efficiency and accuracy, the number of states in the lower-level HMMs is 20, and the number of distinct observation symbols is 20. Third, use the trained HMMs to recognize individual commands in the test data. The output of each test is a sequence of recognized commands. Finally, the Viterbi algorithm is used to produce the most likely underlying command sequence based on the given upper-level HMM parameters.
4.1.2 Evaluation of the NN-based segmentation
The first and the second layers of the neural network are trained using the MATLAB Neural Network Toolbox [40]. The initial values of the weights and biases are randomly selected, and different initial values lead to different performances. If the performance does not reach the goal, the training phase has to be restarted. Figure 10 shows good and bad training results of the neural network. Only when the performance reaches the goal, as shown in the left half of Figure 10, does the neural network achieve adequate accuracy. However, if the training goal has not been met, there are more errors in the segmentation, as can be seen in the right half of Figure 10.
4.1.3 Gesture recognition results

The parameters (A, B, π) of the upper-level HMM are obtained by observing the human subject interacting with the robot for a sustained period of time. The transition probability matrix A is obtained by observing the user's long-term gesture sequence and calculating the transition probability between two gestures, which can be different from person to person. For example, the transition matrix A for one of the experimenters is
$$A = \{a_{ij}\} = \begin{bmatrix}
0.0085 & 0.4927 & 0.0990 & 0.3991 & 0.0007 \\
0.5849 & 0.3982 & 0.0085 & 0.0061 & 0.0023 \\
0.4959 & 0.4937 & 0.0057 & 0.0035 & 0.0012 \\
0.0026 & 0.2974 & 0.3984 & 0.0050 & 0.2966 \\
0.0079 & 0.2963 & 0.3946 & 0.2988 & 0.0024
\end{bmatrix}.$$
The observation symbol probability distribution matrix B is equivalent to the accuracy matrix of sliding windows of each individual gesture before voting in the lower-level HMM, which can be obtained from the individual gesture recognition.
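The paper states that A is calculated from the user's long-term gesture sequence; a minimal sketch of one such estimate (Python/NumPy, an assumed procedure) simply counts command transitions and normalizes the rows:

```python
import numpy as np

def estimate_transition_matrix(command_log, n_states=5, smoothing=1e-4):
    """command_log: sequence of command indices 0..n_states-1 observed over
    a long interaction period. Returns a row-normalized transition matrix."""
    counts = np.full((n_states, n_states), smoothing)
    for prev, nxt in zip(command_log[:-1], command_log[1:]):
        counts[prev, nxt] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

# Example with a short synthetic log of the five commands
log = [0, 1, 0, 3, 1, 0, 2, 1, 3, 4, 1, 0]
A = estimate_transition_matrix(log)
print(A.round(3))
```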


Fig.10 The performance of the NN-based gesture spotting

Fig.11 HMM training phase: likelihood vs. the number of iterations

For example, the matrix B for one of the experimenters is
$$B = \{b_{ij}\} = \begin{bmatrix}
0.6434 & 0.3047 & 0.0122 & 0.0384 & 0.0013 \\
0.0137 & 0.9610 & 0.0074 & 0.0123 & 0.0056 \\
0.0024 & 0.1032 & 0.8846 & 0.0052 & 0.0046 \\
0.1450 & 0.0575 & 0.0428 & 0.7546 & 0.0001 \\
0.0950 & 0.2414 & 0.0055 & 0.0090 & 0.6491
\end{bmatrix}.$$

We set the initial state distribution to be a uniform distribution to reflect the fact that no preference is given to a specific command.
In the HMM training phase, new parameters are recalculated by the re-estimation formulae [38] at each iteration. Then, the likelihood of the data is calculated with the newly estimated parameters. Figure 11 shows the log-likelihood values of the data of different gestures vs. the iteration number. When the number of iterations is greater than 15, the likelihood converges to a stable value. Therefore, in our experiments, we chose 15 iterations.
Figure 12 shows the recognition results of one set of testing data. In (a), the 3D acceleration from the sensor indicates 20 gestures. In (b), the neural network helps to spot the gestures. In (c), when the lower-level HMMs are applied, there are some errors at points a, b, c, d, e, and f. In (d), after considering the context information, the errors at points b, c, and f are corrected by the Bayesian filtering in the upper level. For the video clips of the experiments, please go to the following link: http://ascc.okstate.edu/projectschun.html
The performance of recognition is evaluated by comparing the result with the ground truth. The classification accuracies of the HMM-based and HHMM-based recognition are listed in Tables 2 and 3, respectively. The values in bold are the percentages of the correct

classifications corresponding to the specific gestures. The other numbers indicate the percentages of wrong classifications. It is obvious that the performance of the HHMM is much better than that of the individual HMMs alone.

Fig.12 The results of the neural network and hidden Markov models

Table 2 The accuracy of HMM-based recognition

Ground truth      Decision type
                  1         2         3         4         5         Accuracy
1                 0.8929    0.0357    0.0714    0.0000    0.0000    0.8929
2                 0.1034    0.8076    0.0345    0.0000    0.0345    0.8246
3                 0.1290    0.0968    0.7742    0.0000    0.0000    0.7742
4                 0.6452    0.0323    0.0645    0.2581    0.0000    0.2581
5                 0.0760    0.0000    0.0760    0.0000    0.8462    0.8462

Table 3 The accuracy of HHMM-based recognition

Ground truth      Decision type
                  1         2         3         4         5         Accuracy
1                 0.9286    0.0357    0.0357    0.0000    0.0000    0.9286
2                 0.0690    0.8621    0.0000    0.0345    0.0345    0.8621
3                 0.0606    0.0606    0.8788    0.0000    0.0000    0.8788
4                 0.1613    0.0645    0.0323    0.7419    0.0000    0.7419
5                 0.0769    0.0000    0.0769    0.0000    0.8462    0.8462

4.2 Human daily activity recognition
In this section, the experiment setup and process for daily activity recognition are introduced and the results are described.
4.2.1 Experiment setup and process
For human daily activity recognition, we use two inertial sensors. The experiment setup is shown in Figure 13. Both inertial sensors are connected to a PDA through RS422/RS232 serial converters. The PDA sends data to a desktop computer through WiFi. In our experiments, regular daily activities were performed: standing, sitting, walking level, walking upstairs, walking downstairs, running, sleeping, etc. We recorded 20 sets of data for the training purpose and 30 sets for the testing purpose.

Fig.13 The experiment setup for human daily activity recognition

4.2.2 Evaluation of the neural networks for coarse-grained classification
The neural networks NNw for the waist and NNf for the foot are trained separately with the data collected by the corresponding sensors. Figure 14 shows good training results of the neural networks. When the performance reaches the goal, the neural networks achieve adequate accuracy and only a few errors are observed around the edges of the blocks.
4.2.3 Evaluation of the fine-grained classification
Based on the results of the coarse-grained classification, either the heuristic discrimination module or the HMM-based recognition module is applied for fine-grained classification. Our tests show that the accuracy of the heuristic discrimination module is very high (98.3%). The HMM module is switched on when there is a strong-displacement activity. A sliding window with a length of 1 second and a step length of 0.2 second moves along the segmented data. The output is a sequence of classification decisions. Then, a majority voting function follows to produce a single decision for each window.
Figure 15 shows the acceleration of the waist sensor (the top figure) and the recognition results compared with the ground truth (the bottom figure). In the top figure, the 3D acceleration from the sensor indicates when cyclic, transitional, and stationary activities appear. In the bottom figure there are some misclassifications indicated in the circled areas. The two circles in the bottom figure show that the errors are caused by the HMM-based recognition algorithm for

the strong-displacement activities. The classification accuracy of the HMM-based recognition on the testing data after the majority voting function is shown in Table 4.

Left: the performance goal of the foot sensor is met, accuracy = 98.40%; Right: the performance goal of the waist sensor is met, accuracy = 94.61%.

Fig.14 The training results of the NN-based segmentation for daily activity recognition

Fig.15 The final results of the daily activity classification

Table 4 Classification accuracy obtained from the testing data

Activity type           HMM decision type
                        Walking    Walking downstairs    Walking upstairs    Running    Accuracy
Walking                 0.9030     0.0581                0.0360              0.0029     0.9030
Walking downstairs      0.0478     0.9250                0.0270              0.0020     0.9250
Walking upstairs        0.0759     0.0289                0.8915              0.0037     0.8915
Running                 0.0901     0.0120                0.0278              0.8702     0.8701


5 Conclusions

In this paper, we introduced a smart assisted living system for elderly people, patients, and the disabled. The robot serves as the computation platform and the service provider in the SAIL system. The companion robot can infer the human intentions and conditions from the sensor data and make corresponding reactions. To realize natural HRI in such a SAIL system, we proposed (1) a neural-network-based gesture spotting and HHMM-based hand gesture recognition algorithm for elderly people who suffer from problems with speech, and (2) a multi-sensor-fusion-based human daily activity recognition algorithm. Both of them are based on neural networks and hidden Markov models. Compared to other similar solutions, our algorithms can realize autonomous recognition of hand gestures and daily activities in real time. The algorithms are lightweight and resource-aware, since the HMM modules are triggered only when there is a gesture in hand gesture recognition or when there is a strong-displacement activity in human daily activity recognition. Therefore, the computational cost is reduced, which is important for embedded computing systems. Furthermore, for hand gesture recognition, an HHMM is used to model the sequential constraints in the gestures, which increases the recognition accuracy. For daily activity recognition, the multi-sensor fusion scheme can increase the types of daily activities to be recognized. In the future, we will modify and implement the recognition algorithms on a real robot in real time.

References:

[1] Babyboomercaretaker Co. Ltd. Baby boomers aging needs[EB/OL]. [2008-10-22]. www.babyboomercaretaker.com/babyboomer/index.html.

[2] HAIGH K Z, YANCO H. Automation as caregiver: a survey of issues and technologies[C]//Proceedings of the AAAI-02 Workshop on Automation as Caregiver. Edmonton, Canada: [s.n.], 2002: 39-53.

[3] HAASCH A, HOHENNER S, HUWEL S, et al. BIRON: the Bielefeld Robot Companion[C]//Proc Int Workshop on Advances in Service Robots. [S.l.]: [s.n.], 2004: 27-32.

[4] FRITSCH J, KLEINEHAGENBROCK M, HAASCH A, et al. A flexible infrastructure for the development of a robot companion with extensible HRI capabilities[C]//Proceedings of the IEEE International Conference on Robotics and Automation. Barcelona, Spain: IEEE Press, 2005: 3419-3425.

[5] YANCO H A, DRURY J L. Classifying human-robot interaction: an updated taxonomy[C]//Proceedings of the IEEE International Conference on Systems, Man and Cybernetics. The Hague, Netherlands: IEEE Press, 2004: 2841-2846.

[6] ZHU C, SUN W, SHENG W. Wearable sensors based human intention recognition in smart assisted living systems[C]//Proceedings of the IEEE International Conference on Information and Automation. Zhangjiajie, China: IEEE Press, 2008: 954-959.

[7] ZHU C, CHENG Q, SHENG W. Human intention recognition in smart assisted living systems using a hierarchical hidden Markov model[C]//Proceedings of the IEEE International Conference on Automation Science and Engineering. Arlington, USA: IEEE Press, 2008: 253-258.

[8] YANG G Z, YACOUB M. Body sensor networks[M]. Berlin, Germany: Springer, 2006.

[9] Zigbee Alliance. Zigbee telecommunication services[EB/OL]. [2007-08-05]. http://www.zigbee.org/en/index.asp.

[10] MORRISSEY W, ZAJICEK M. Remembering how to use the internet: an investigation into the effectiveness of voice help for older adults[C]//Proceedings of HCI International. New Orleans, USA: [s.n.]: 700-704.

[11] CZAJA S J. Aging and the acquisition of computer skills[M]//ROGERS W A, ARTHUR D F, WALKER N. Aging and skilled performance: advances in theory and applications. New York: Psychology Press, 1996: 201-220.

[12] YANCO H A, DRURY J L. A taxonomy for human-robot interaction[C]//Proceedings of the AAAI 2002 Fall Symposium on Human-Robot Interaction. Menlo Park, California: AAAI Press, 2002: 111-119.

[13] RABINER L R. A tutorial on hidden Markov models and selected applications in speech recognition[J]. Proc of the IEEE, 1989, 77(2): 257-286.

[14] HAGAN M T, DEMUTH H B, BEALE M H. Neural network design[M]. Chicago: PWS Publishing Company, 1996.

[15] MITRA S, ACHARYA T. Gesture recognition: a survey[J]. IEEE Trans on Systems, Man and Cybernetics: Part C, 2007, 27(2): 311-324.

[16] LEE C, XU Y. Online, interactive learning of gestures for human/robot interface[C]//Proceedings of the IEEE International Conference on Robotics and Automation, volume 4. Albuquerque, NM: IEEE Press, 1996: 2982-2987.

[17] VRLOGIC LLC. CyberGlove[EB/OL]. [2008-10-20]. http://vrlogic.com/html/immersion/cyberglove.html.

[18] HUYNH T, SCHIELE B. Analyzing features for activity recognition[C]//Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-aware Services: Usages and Technologies. Grenoble, France: ACM Press, 2005: 159-163.

[19] OKA R. Spotting method for classification of real world data[J]. The Computer Journal, 1998, 41(8): 559-565.

[20] RAMAMOORTHY A, VASWANI N, CHAUDHURY S, et al. Recognition of dynamic hand gestures[J]. Pattern Recognition, 2003, 36(9): 2069-2081.

[21] LENMAN S, BRETZNER L, THURESSON B. Computer vision based hand gesture interfaces for human-computer interaction: technical report TRITA-NA-D0209, CID-172[R]. Stockholm, Sweden: NADA, Department of Numerical Analysis and Computer Science, 2002.

[22] LEE H K, KIM J H. An HMM-based threshold model approach for gesture recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999(21): 961-973.

[23] KEHAGIAS A, FORTIN V. Time series segmentation with shifting means hidden Markov models[J]. Nonlin Processes Geophys, 2006(13): 339-352.

[24] AMINIAN K, ROBERT P, BUCHSER E E, et al. Physical activity monitoring based on accelerometry: validation and comparison with video observation[J]. Medical and Biological Engineering and Computing, 1999(3): 304-308.

[25] NAJAFI B, AMINIAN K, PARASCHIV-IONESCU A, et al. Ambulatory system for human motion analysis using a kinematic sensor: monitoring of daily physical activity in the elderly[J]. IEEE Trans on Biomedical Engineering, 2003, 50(6): 711-723.

[26] MITCHELL T. Decision tree learning[J]. Machine Learning, 1997(11): 52-78.

[27] LOWD D, DOMINGOS P. Naive Bayes models for probability estimation[C]//Proceedings of the 22nd International Conference on Machine Learning. New York, USA: ACM Press, 2005.

[28] LESTER J, CHOUDHURY T, KERN N, et al. A hybrid discriminative/generative approach for modeling human activities[C]//Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 2005). Edinburgh, Scotland: Professional Book Center, 2005: 766-772.

[29] MANTYJARVI J, HIMBERG J, SEPPANEN T. Recognizing human motion with multiple acceleration sensors[C]//2001 IEEE International Conference on Systems, Man, and Cybernetics. Tucson, USA: IEEE Press, 2001: 747-752.

[30] SMITH L I. A tutorial on principal components analysis[EB/OL]. [2003-10-20]. http://kybele.psych.cornell.edu/edelman/Psych465Spring2003/PCAtutorial.pdf.

[31] HYVARINEN A, KARHUNEN J, OJA E. Independent component analysis[M]. San Francisco, USA: John Wiley & Sons, 2001.

[32] DEVAUL R W, DUNN S. Real-time motion classification for wearable computing applications[R]. Technical report. MIT, USA: MIT Media Laboratory, 2001.

[33] TITTERINGTON D, SMITH A, MAKOV U. Statistical analysis of finite mixture distributions[M]. San Francisco, USA: John Wiley & Sons, 1985.

[34] FREUND Y. Boosting a weak learning algorithm by majority[C]//Proceedings of the Third Annual Workshop on Computational Learning Theory. Rochester, New York: Morgan Kaufmann Publishers, 1990: 202-216.

[35] BAUM L E, EAGON J A. An inequality with applications to statistical estimation for probabilistic functions of a Markov process and to a model for ecology[J]. Bull Amer Math Soc, 1967(73): 360-363.

[36] BAUM L E, SELL G R. Growth transformations for functions on manifolds[J]. Pac J Math, 1968, 27(2): 211-227.

[37] VITERBI A J. Error bounds for convolutional codes and an asymptotically optimal decoding algorithm[J]. IEEE Trans Informat Theory, 1967(13): 260-269.

[38] DEMPSTER A P, LAIRD N M, RUBIN D B. Maximum likelihood from incomplete data via the EM algorithm[J]. J Roy Stat Soc, 1977, 39(1): 1-38.

[39] MEMSense LLC. Products[EB/OL]. [2009-05-24]. http://www.memsense.com/.

[40] MATLAB LLC. Neural network toolbox[EB/OL]. [2009-02-08]. http://www.mathworks.com/products/neuralnet.

(Editor: CHEN Bin)