Society for Music Perception and Cognition
August 11-14, 2011
Eastman School of Music of the University of Rochester
Rochester, NY

Welcome

Dear SMPC 2011 attendees,

It is my great pleasure to welcome you to the 2011 meeting of the Society for Music Perception and Cognition. It is a great honor for Eastman to host this important gathering of researchers and students from all over North America and beyond. At Eastman, we take great pride in the importance that we accord to the research aspects of a musical education. We recognize that music perception/cognition is an increasingly important part of musical scholarship, and it has become a priority for us, both at Eastman and at the University of Rochester as a whole. This is reflected, for example, in our stewardship of the ESM/UR/Cornell Music Cognition Symposium, in the development of several new courses devoted to aspects of music perception/cognition, in the allocation of space and resources for a music cognition lab, and in the research activities of numerous faculty and students. We are thrilled, also, that the new Eastman East Wing of the school was completed in time to serve as the SMPC 2011 conference site. We trust you will enjoy these exceptional facilities, and will take pleasure in the superb musical entertainment provided by Eastman students during your stay.

Welcome to Rochester, welcome to Eastman, welcome to SMPC 2011. We're delighted to have you here!

Sincerely,
Douglas Lowry
Dean, Eastman School of Music




Acknowledgements

Monetary support graciously provided by:
Eastman School of Music and Dean Douglas Lowry
Eastman Multimedia, Cognition, and Research Computing Lab
University of Rochester Committee for Interdisciplinary Studies: UCIS Music Cognition Cluster
UR Department of Brain and Cognitive Sciences
UR Center for Language Sciences
UR Program on Plasticity, Development, and Cognitive Neuroscience
UR Humanities Project
National Association of Music Merchants (NAMM)

Additional and in-kind support provided by:
Eastman Concert Office (Andrew Green, Serin Kim Hong)
Eastman Technology and Media Production Office (Helen Smith, Dominick Fraczek)
Eastman Departments of Music Theory and Organ & Historical Instruments
Christ Church Episcopal and the Eastman-Rochester Organ Initiative
UR Conference and Events Office (Celia Palmer, Denise Soudan)
Student workers from the Eastman School of Music and University of Rochester Brain & Cognitive Sciences Department

SMPC 2011 Programming Committee:
Erin Hannon, University of Nevada at Las Vegas
J. Devin McAuley, Michigan State University
Carol Krumhansl, Cornell University
Peter Pfordresher (chair), University at Buffalo SUNY
Justin London, Carleton College
Frank Russo, Ryerson University
Elizabeth Margulis, University of Arkansas
Michael Schutz, McMaster University
Peter Martens, Texas Tech University

SMPC 2011 Local Events Committee:
Elizabeth West Marvin, Eastman School of Music
David Temperley, Eastman School of Music

SMPC Executive Board:
Tonya Bergeson (Secretary), Indiana University
Aniruddh Patel (President), The Neurosciences Institute
Andrea Halpern (President Elect), Bucknell University
Frank Russo, Ryerson University
Petr Janata, University of California at Davis
David Temperley, Eastman School of Music
Scott Lipscomb (Treasurer), University of Minnesota
Finn Upham (Student Member), Northwestern University
Elizabeth Margulis, University of Arkansas

About our Keynote Speaker: Nina Kraus

Professor Kraus is the Hugh Knowles Professor (Communication Sciences; Neurobiology & Physiology; Otolaryngology) at Northwestern University, where she directs the Auditory Neuroscience Laboratory. Dr. Kraus investigates the biological bases of speech and music. She investigates learning-associated brain plasticity throughout the lifetime in diverse populations (normal; expert musicians; clinical populations with dyslexia, autism, or hearing loss) and also in animal models. In addition to being a pioneering thinker who bridges multiple disciplines (aging, development, literacy, music, and learning), Dr. Kraus is a technological innovator who roots her research in translational science. For more on Professor Kraus's research, see: http://www.brainvolts.northwestern.edu


Program overview

Talks are listed by author names. Parentheses show abstract numbers used in the pages that follow (numbers also displayed next to each abstract title). Session chairs are listed after session titles by last name.

Thursday, 11 August, 2011

8:00-17:00  Registration (Wolk Atrium)

8:00-9:00  Continental Breakfast (Wolk Atrium)

9:00-10:20  Parallel sessions (talks at 9:00, 9:20, 9:40, and 10:00)

Absolute pitch (Lipscomb), Hatch Recital Hall: Sharma & Levitin (1); Loui, Zamm, & Schlaug (2); Marvin & Newport (3); Weisman, Balkwill, Hoeschele, Moscicki, & Sturdy (4)

Evolution (Martens), ESM Room 120: Chan, McGarry, Corpuz, & Russo (5); Parncutt (6); Perlovsky (7); Savage, Rzeszutek, Grauer, Wang, Trejaut, Lin, & Brown (8)

10:20-10:40  Break

10:40-12:00  Parallel sessions (talks at 10:40, 11:00, 11:20, and 11:40)

Emotion 1 (Margulis), Hatch Recital Hall: Trenck, Martens, & Larsen (9); McGarry & Russo (10); Temperley & Tan (11); Martens & Larsen (12)

Cross-modal effects (Repp), ESM Room 120: Livingstone, Palmer, Wanderley, & Thompson (13); Marin, Gingras, & Bhattacharya (14); Hedger, Nusbaum, & Hoeckner (15); Krumhansl & Huang (16)

12:00-14:00  Lunch Break

14:00-15:40  Parallel sessions (talks at 14:00, 14:20, 14:40, 15:00, and 15:20)

Development (Cuddy), Hatch Recital Hall: Adachi (17); Trehub, Zhou, Plantinga, & Adachi (18); Patel, Iversen, Brandon, & Saffran (19); Corrigall & Trainor (20); Filippa & Gratier (21)

Timbre (Hasegawa), ESM Room 120: Chiasson, Traube, Lagarrigue, Smith, & McAdams (22); Tardieu & McAdams (23); Kendall & Vassilakis (24); Lembke & McAdams (25); Paul & Schutz (26)

15:40-16:00  Break

16:00-18:30  Plenary session: President's address, SMPC achievement award, and Keynote Lecture

Keynote Lecture: "Cognitive Factors Shape Brain Networks for Auditory Skills," Nina Kraus

(Hatch Recital Hall; additional seating in ESM Room 120)

18:30  Opening Reception, co-sponsored by the Eastman School of Music and the National Association of Music Merchants (NAMM)

(Sproull Atrium at Miller Center)

20:00  Optional Evening Concert: Mu Phi Epsilon International Competition Concert

Location depends upon whether winners include an organist: Kilbourn Hall or Christ Church Episcopal (across East Avenue). Suggested $10 donation at the door.

Presenters: Please test the audio/visual setup in the room where your talk will be held. Available times: 8:00-9:00 and 13:00-14:00.


Friday, 12 August, 2011

7:30-9:00  Special Event: Student Breakfast (Cominsky Promenade, 2nd Floor, ESM Main Building), co-sponsored by NAMM and SMPC

8:00-17:00  Registration (Wolk Atrium)

8:00-9:00  Continental Breakfast (all except student attendees) (Wolk Atrium)

9:00-10:20  Parallel sessions (talks at 9:00, 9:20, 9:40, and 10:00)

Imagery/Individual differences (London), Hatch Recital Hall: Eitan & Granot (27); Benadon & Winkler (28); Aufegger & Vitouch (29); McAuley, Henry, & Wedd (30)

Auditory system (McAdams), ESM Room 120: Schramm & Luebke (31); Saindon, Trehub, & Schellenberg (32); Bergeson & Peterson (33); Cariani (34)

10:20-10:40  Break

10:40-12:00  Parallel sessions (talks at 10:40, 11:00, 11:20, and 11:40)

Symposium: Musical models of speech rhythm and melody, Hatch Recital Hall: Brown, Chow, Weishaar, & Milko (35); Chow, Poon, & Brown (36); Dilley (37); Port (38)

Physiological responses (Halpern), ESM Room 120: Mitchell, Mogil, Koulis, & Levitin (39); Ladinig, Huron, Horn, & Brooks (40); Mitchell, Paisley, & Levitin (41); Upham & McAdams (42)

12:00-14:00  Lunch Break (all except SMPC executive board); Executive Board Meeting (Ranlet Lounge, 2nd Floor, Eastman Theatre)

14:00-15:40  Parallel sessions (talks at 14:00, 14:20, 14:40, 15:00, and 15:20)

Cross-cultural effects (Tan), Hatch Recital Hall: Hegde, Ramanjam, & Panikar (43); Athanasopoulos, Moran, & Frith (44); Kalender, Trehub, & Schellenberg (45); Beckett (46); Vempala & Russo (47)

Neuroscience (Large), ESM Room 120: Herholz, Halpern, & Zatorre (48); Norman-Haignere, McDermott, Fedorenko, & Kanwisher (49); Moussard, Bigand, & Peretz (50); Butler & Trainor (51); Iversen & Patel (52)

15:40-16:00  Break

16:00-18:00  Poster session 1: Abstracts A.1-A.43 (Eastman East Wing 415)

18:00  Optional Event: East End Festival

Presenters: Please test the audio/visual setup in the room where your talk will be held. Available times: 8:00-9:00 and 13:00-14:00.


Saturday, 13 August, 2011

8:00-17:00  Registration (Wolk Atrium)

8:00-9:00  Continental Breakfast (Wolk Atrium)

9:00-10:20  Parallel sessions (talks at 9:00, 9:20, 9:40, and 10:00)

Rhythm 1 (MacKenzie), Hatch Recital Hall: Manning & Schutz (53); McAuley, Henry, Rajarajan, & Nave (54); Cogsdill & London (55); Poudrier & Repp (56)

Cognition 1 (Bergeson), ESM Room 120: Schachner & Carey (57); Vuvan & Schmuckler (58); Koreimann & Vitouch (59); Houlihan & Levitin (60)

Metatheoretical approaches (Zbikowski), Kilbourn Hall: Narmour (61); Tirovolas & Levitin (62); Tan (63); Narmour (64)

10:20-10:40  Break

10:40-12:00  Parallel sessions (talks at 10:40, 11:00, 11:20, and 11:40)

Rhythm 2 (McAuley), Hatch Recital Hall: Albin, Lee, & Chordia (65); Riggle (66); London & Cogsdill (67); Ammirante & Thompson (68)

Tonality and melody (Cohen), ESM Room 120: Sears, Caplin, & McAdams (69); Brown (70); Parncutt & Sapp (71); Miller, Wild, & McAdams (72)

Computational modeling (Bartlette), Kilbourn Hall: Temperley (73); Temperley & de Clercq (74); Large & Almonte (75); Mavromatis (76)

12:00-13:00  Business meeting (Hatch Recital Hall)

13:00-14:00  Lunch Break

14:00-15:40  Parallel sessions (talks at 14:00, 14:20, 14:40, 15:00, and 15:20)

Emotion 2 (Narmour), Hatch Recital Hall: Margulis (77); Plazak (78); Huron & Horn (79); Chordia & Sastry (80); Thompson, Marin, & Stewart (81)

Cognition 2 (Parncutt), ESM Room 120: Rosenthal, Quam, & Hannon (82); Ashley (83); Creel (84); Mavromatis & Farbood (85); Anderson, Duane, & Ashley (86)

Music and language (Van Handel), Kilbourn Hall: Matsumoto & Marcum (87); Liu, Jiang, Thompson, Xu, Yang, & Stewart (88); Temperley & Temperley (89); Sullivan & Russo (90); Cox (91)

15:40-16:00  Break

16:00-17:30  Poster session 2: Abstracts B.1-B.34 (Eastman East Wing 415)

17:45-18:45  Lecture-Recital by Randall Harlow: "Acoustics and psychohaptics in a pipe organ reconstruction: Eastman's Craighead-Saunders Organ" (Christ Church Episcopal)

19:00  Banquet (Rochester Club Ballroom)

Presenters: Please test the audio/visual setup in the room where your talk will be held. Available times: 8:00-9:00 and 13:00-14:00.


Sunday, 14 August, 2011

8:00-9:00  Continental Breakfast (Wolk Atrium)

9:00-10:20  Parallel sessions (talks at 9:00, 9:20, 9:40, and 10:00)

Emotion 3 (Thompson), Hatch Recital Hall: Egermann, Pearce, Wiggins, & McAdams (92); Le Groux, Fabra, & Verschure (93); Albrecht, Huron, & Morrow (94); Russo & Sandstrom (95)

Performance 1 (Ashley), ESM Room 120: Lisboa, Demos, Chaffin, & Begosh (96); Brown & Palmer (97); Gross (98); Devaney, Wild, Schubert, & Fujinaga (99)

10:20-10:40  Break

10:40-12:00  Parallel sessions (talks at 10:40, 11:00, 11:20, and 11:40)

Analytical approaches (Huron), Hatch Recital Hall: Aziz (100); Liu, Sun, & Chordia (101); Sigler & Handelman (102); Samplaski (103)

Performance 2 (Beckett), ESM Room 120: Kruger, McLean, & Kruger (104); Curtis, Hegde, & Bharucha (105); Poon & Schutz (106); Pfordresher, Tilton, Mantell, & Brown (107)

Presenters: Please test the audio/visual setup in the room where your talk will be held. Available time: 8:00-9:00.


Titles and abstracts for talks

1. Effects of Musical Instrument on Absolute Pitch Ability
Vivek V. Sharma* & Daniel J. Levitin
McGill University, Montréal, Québec, Canada
*= Corresponding author, [email protected]

Musicians with absolute pitch (AP), the ability to name musical tones without an external reference, often report training on a musical instrument from a young age (Sergeant, 1969). To understand the acquisition process of AP, it would be useful to know how the musical instruments played by AP possessors influence the development of pitch templates in their long-term memory. We hypothesize that players of fixed-pitched instruments identify tones faster and more accurately than players of variable-pitched instruments because of the former group's greater exposure to precise pitch values, and the consequent preferential tuning of auditory system neurons to those values. To test our hypothesis, we examined how AP musicians labeled in-tune and detuned pitches. We tested 10 pianists and 10 violinists. Tones of 3 different timbres were presented: piano, violin, and sinusoidal. Their frequencies formed a continuum of pitch classes that were individually separated by 20¢ intervals and ranged from C4 to C5, inclusive, where A4 = 440 Hz. Dependent variables were the percentages of correctly labeled tones and reaction times. The participants also rated the goodness-of-fit of each tone using a continuous scale. Because the piano is fixed-pitched, it may repetitively reinforce the codification of pitch to verbal labels within long-term memory more effectively than the variable-pitched violin. We suspect that the study supports the hypothesized effects of tone mapping and musical training on AP acquisition, perception, and memory.

2. Emotional Judgment in Absolute Pitch
Psyche Loui*, Anna Zamm, Matthew Sachs, and Gottfried Schlaug
Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA
*= Corresponding author, [email protected]

Absolute pitch (AP) is a unique phenomenon characterized by the ability to name the pitch class of any note without a reference. In recent years, AP has become a model for exploring nature-nurture interactions. While past research focused on differences between APs and controls in domains such as pitch naming, little is known about how AP possessors tackle other musical tasks. In this study we asked whether AP possessors recruit different brain resources from controls during a task in which APs are anecdotally similar to controls: the task of emotional judgment. Functional MRI was acquired from 15 APs and 15 controls (matched in age, sex, ethnicity, and musical training) as they listened to musical sound clips and rated the arousal level of each clip on a scale of 1 (low-arousal) to 4 (high-arousal), relative to a silent rest condition. Additionally, we acquired Diffusion Tensor Imaging (DTI) data to investigate white matter differences between AP possessors and controls. Behavioral results showed no significant difference between APs and controls. However, a second-level contrast between music and rest conditions showed that APs recruited more neural activity in left Heschl's gyrus (primary auditory cortex). Another second-level contrast between high-arousal and low-arousal music revealed increased activity in APs in the left posterior superior temporal gyrus (secondary auditory cortex). DTI showed that APs had larger connections between the left posterior superior temporal gyrus and the left posterior middle temporal gyrus, regions thought to be involved in sound perception and categorization, respectively. Despite a behavioral task designed to minimize differences between APs and controls, we observed significant between-group differences in brain activity and connectivity. Results suggest that AP possessors obligatorily recruit extra neural resources for perceiving and categorizing musical sounds.


3. The Absolute Pitch Continuum: Evidence of Incipient AP in Musical Amateurs
Elizabeth W. Marvin (1)*, Elissa L. Newport (2)
(1) Eastman School of Music of the University of Rochester, Rochester, NY, USA, (2) University of Rochester, Rochester, NY, USA
*= Corresponding author, [email protected]

Researchers have typically regarded absolute pitch (AP) as an all-or-nothing proposition. Recent research reveals a different picture, however, suggesting that AP abilities exist along a continuum and that many listeners, some without extensive musical training, encode pitch information accurately in memory (e.g., Bermudez & Zatorre, 2009; Ross, Olsen, Marks & Gore, 2004; Schellenberg & Trehub, 2003). This paper reports new data that support the continuum theory. Three groups of participants (n=254), professional music theorists, freshman music majors, and liberal-arts students, took a pitch-naming test and an implicit learning test requiring them to discriminate between pitch patterns learned during a familiarization phase and their transpositions (see also Saffran & Griepentrog, 2001). In a previous test of AP and non-AP musicians, scores on the naming and implicit learning tests were highly correlated (Marvin & Newport, 2008). In the current work, those who showed AP on pitch naming (n=31) scored significantly higher than those who did not, verifying that the implicit learning test does measure pitch memory, without requiring pitch labels. Interestingly, examination of individual scores on the implicit learning task revealed 12 "incipient AP" participants (some from each group), who scored 88-100% correct on the implicit learning test (as high as AP participants), but averaged only 34% correct on pitch naming. This provides the unusual opportunity to examine the pitch discrimination and memory abilities of a population of listeners who appear to exhibit strong AP but without extensive musical training or pitch labeling strategies as part of AP. On-going research tests these listeners for microtonal pitch discrimination, nonmusical memory (digit span), and musical memory (a musical digit-span analog). Preliminary data show comparable scores for AP musicians and for incipient-AP listeners, even those who are musical amateurs. Those with high scores on the implicit learning task do not score significantly higher on memory tests than controls, though they show better pitch discrimination in some registers.

4. Identifying Absolute Pitch Possessors Without Using a Note-Naming Task
Ronald Weisman*, Laura-Lee Balkwill, Marisa Hoeschele, Michele Moscicki, and Christopher Sturdy
Queen's University, Kingston, Ontario, Canada
*= Corresponding author, [email protected]

Tests of AP have generally used note-naming tasks that presume fluency with the scales of Western music. If note naming constitutes the only measure, then by fiat, only trained musicians can possess AP. Here we report on an AP test that does not require a note-naming response. The participants were 60 musicians, who self-reported AP. Our pitch chroma labeling task was adapted from challenging operant go/no-go discriminations (Weisman, Niegovan, Williams, & Sturdy, 2004) used to test songbirds, rats, and humans with tones mistuned to the musical scale. In our pitch-labeling task, we presented sine wave tones tuned to the 12-note equal-temperament scale, in a discrimination between the first and second 6 notes in octaves four, five, and six. Results were validated against Athos, Levinson, Kistler, et al.'s (2007) sine wave note-naming test of AP. Actual AP possessors (n=15) began music training earlier and had more music training than nonpossessors (n=45), but 25 nonpossessors matched to AP possessors in experience had no higher AP scores than other nonpossessors. Here, for simplicity, we report percent correct scores for the pitch labeling task, but d', A', and percent correct measures were all highly correlated, rs > .90. Over trials, AP possessors came to label the half-octave membership of the 36 tones with M = 90% accuracy; nonpossessors scored only slightly better than chance, M = 55% correct. Most important, the pitch-labeling task successfully identified the AP status of 58 of 60 participants on Athos et al.'s test. In future studies, the pitch-labeling task will be converted to a web-based protocol to test large numbers of nonmusicians. Then, using our labeling task in conjunction with Ross's (2004) reproduction test, we hope to accurately identify nonmusician AP possessors or, with enough participants from several populations, cast doubt on the hypothesis that nonmusicians can possess AP.
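Abstract 4 notes that d', A', and percent correct were all highly correlated (rs > .90). For readers unfamiliar with these signal detection measures, the following is a minimal Python sketch of how d' and the nonparametric A' are computed from hit and false-alarm rates (formulas as given by Stanislaw & Todorov, 1999); the example rates are hypothetical, not the study's data.

```python
# Signal detection sensitivity measures for a go/no-go labeling task.
# Illustrative only; the hit/false-alarm rates below are hypothetical.
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate), via the inverse normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Nonparametric sensitivity A' (Stanislaw & Todorov, 1999)."""
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5
    num = (h - f) ** 2 + abs(h - f)
    den = 4 * max(h, f) - 4 * h * f
    return 0.5 + (1 if h > f else -1) * (num / den)

# A hypothetical strong AP possessor: 90% hits, 10% false alarms.
print(round(d_prime(0.90, 0.10), 2))  # 2.56
print(round(a_prime(0.90, 0.10), 2))  # 0.94
```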


5. An Empirical Test of the Honest Signal Hypothesis
Lisa Chan (1)*, Lucy M. McGarry (1), Vanessa Corpuz (1), Frank A. Russo (1)
(1) Ryerson University, Toronto, Canada
*= Corresponding author, lisa.chan@psych.ryerson.ca

Several theorists have proposed that music might have functioned in our evolutionary history as an honest signal (Cross & Woodruff, 2008; Levitin, 2008; also see Darwin, 1872). The term honest signal originates in ethology, where it has been used to refer to a signal that has evolved to benefit the receiver as well as the signaler (e.g., the dart frog "advertises" its chemical defenses to predators with conspicuous skin coloration). In this study we assess whether music may be more "honest" than speech with regard to revealing a performer's true (experienced) emotions. Performers were induced with a happy or sad emotion using an emotion induction procedure involving music and guided imagery. Subjective evaluation of experienced emotion suggested that the inductions were highly effective. Performers were subsequently asked to perform sung or spoken phrases that were intended to convey either happiness or sadness. The intended emotion could thus be characterized as congruent or incongruent with the performer's induced emotion. Recordings of performances were evaluated by participants with regard to valence and believability. Valence ratings revealed that performers were successful in expressing the intended emotion in the emotionally congruent condition (i.e., higher valence ratings for intended happy than for intended sad) and unsuccessful in the emotionally incongruent condition (i.e., intermediate valence ratings for intended happy and for intended sad). Critically, song led to higher believability ratings than speech, owed largely to the high believability of song produced with sad expression. These results will be discussed in the context of the honest signal hypothesis and recent evidence for mimicry in perception of sung emotion.

6. Defining Music as a Step Toward Explaining its Origin
Richard Parncutt*
University of Graz, Austria
*= Corresponding author, richard.parncutt@uni-graz.at

Since the breakdown of tonality (Wagner to Schoenberg) and the emergence of ethnomusicology, musicologists have been reluctant to define music, since definitions always depend on historical, cultural, and academic context. But these historical developments merely showed that music need not be tonal and that the distinguishing features of Western music should be absent from a general definition. They also drew attention to the different meanings of "music" and its translations in different cultures and periods. Today's theories of the origin(s) of music differ in part because researchers still have different implicit definitions of music. The problem can be solved by specifying exactly what music is assumed to be, which incidentally also allows "musicology" to be defined. A definition might run as follows. Both music and language are acoustic, meaningful, gestural, rhythmic/melodic, syntactic, social, emotional, and intentional; music and language differ in that music is less lexical, more repetitive, more spiritual, less socially essential, and more expertise based. Of course all the terms in these lists need to be explained and if possible operationalized, and individual claims supported. Given the paucity of reliable information about the behavior of early humans that could have influenced music's development, we need to explore new approaches to evaluating theories of its origin. One approach is to evaluate the extent to which each theory can parsimoniously account for or predict the listed features. Another is to evaluate the quantity and quality of relevant empirical studies that are consistent with the specific processes posited in the theory. I will present details of this new systematic approach and briefly show how it can be used to evaluate theories such as those based on mate selection, social cohesion, and motherese.


7. Musical Emotions: Functions, Origins, Evolution
Leonid Perlovsky*
Harvard University, Cambridge, and Air Force Research Lab., Hanscom AFB, MA, USA
*= Corresponding author, [email protected]

Music seems an enigma. Existing theories cannot explain its cognitive functions or evolutionary origins. Here a hypothesis is proposed based on a synthesis of cognitive science and mathematical models of the mind, which describes a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The talk considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. Then it discusses a hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and the evolution of consciousness and culture. The talk considers a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required evolution of music in parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The proposed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in psychological and neuroimaging research are discussed.

8. Music as a Marker of Human Migrations: An Analysis of Song Structure vs. Singing Style
Patrick Savage (1)*, Tom Rzeszutek (1), Victor Grauer (2), Ying-fen Wang (3), Jean Trejaut (4), Marie Lin (4), Steven Brown (1)
(1) Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Canada, (2) Independent scholar, Pittsburgh, USA, (3) Graduate Institute of Musicology, National Taiwan University, Taipei, Taiwan, (4) Transfusion Medicine Laboratory, Mackay Memorial Hospital, Taipei, Taiwan
*= Corresponding author, [email protected]

The discovery that our genes trace the migration of all humans back to a single African "mitochondrial Eve" has had an enormous impact on our understanding of human pre-history. Grauer (2006) has claimed that music, too, can trace pre-historic human migrations, but critics argue that music's time-depth is too shallow (i.e., music changes too rapidly to preserve ancient relationships). We predicted that if any musical features were to have the necessary time-depth, they would be the structural features, rather than the performance features, of traditional group songs. To test this prediction, we used Cantometric codings of 222 traditional group songs from 8 Taiwanese aboriginal tribes to create separate distance matrices for music based on either song structure or singing style. Surprisingly, both distance matrices were significantly correlated (p<0.01) with genetic distances based on mitochondrial DNA, a migration marker with well-established time-depth. However, in line with our prediction, structure (r²=0.27) accounted for twice as much variance as performance style (r²=0.13). Independent coding of these songs using a new classification scheme that focuses exclusively on structural features confirmed the correlation with genes (r²=0.19). Further exploratory analyses of the different structural sub-categories revealed that features related to pitch (e.g., interval size, scale) were more strongly correlated with genes (r²=0.24) than those related to rhythm (r²=0.12), text (r²=0.05), texture (r²=0.08), or form (r²=0.13). These results suggest that, while song structure, especially pitch, may be a stronger migration marker than singing style, many types of musical features may have sufficient time-depth to track pre-historic population migrations.
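Correlating a musical distance matrix with a genetic one, as in abstract 8, is typically done with a permutation procedure such as a Mantel test, because the entries of a distance matrix are not independent observations. The abstract does not spell out its exact procedure, so the Python sketch below is only a generic illustration of the approach, run on tiny hypothetical matrices.

```python
# Mantel-style permutation correlation between two distance matrices.
# Generic illustration with hypothetical data; the study's own
# procedure may differ in detail.
import numpy as np

def mantel(d1: np.ndarray, d2: np.ndarray, n_perm: int = 9999, seed: int = 0):
    """Pearson r between upper triangles; p-value obtained by permuting
    the rows/columns of one matrix in tandem."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(d1.shape[0])
        r_perm = np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1]
        if abs(r_perm) >= abs(r_obs):
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

# Tiny hypothetical 4-population example (e.g., song structure vs. mtDNA).
music = np.array([[0, 1, 4, 3], [1, 0, 5, 2], [4, 5, 0, 6], [3, 2, 6, 0.0]])
genes = np.array([[0, 2, 5, 3], [2, 0, 6, 2], [5, 6, 0, 7], [3, 2, 7, 0.0]])
r, p = mantel(music, genes)
print(f"r = {r:.2f}, r^2 = {r * r:.2f}, p = {p:.3f}")
```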


9. The Power of Music: The Composition and Perception of Emotion in Music
Megan Trenck*, Peter Martens, & Jeff Larsen
Texas Tech University, Lubbock, TX, USA
*= Corresponding author, metrenck@gmail.com

Melody has had a prominent place in recent studies on the emotional content of music, such as Brower (2002) and Krumhansl (2002). Further, Collier and Hubbard (2001) claim "that emotional valence may be based more on the horizontal rather than the vertical aspect of music." To investigate some specifics of what makes emotions attributable to melody, a combination of undergraduate and graduate music majors at Texas Tech University were asked to compose a melody depicting either happiness or sadness. No restrictions were placed on the use of time signature, key signature, or tempo, but melodies were restricted to one monophonic line of music. Melodies were analyzed for their distribution of first-order intervals (intervals between adjacent notes), melodic spans (the distance a melody travels in one direction before a contour change, measured in semitones), and additional ways melodies behave relative to their evolving pitch mean. The findings corroborate some of the perceptual conclusions of Gabrielsson (2009) and Collier and Hubbard (2001), who found that narrow ranges bring out sadness whereas happiness is derived from wide melodic ranges and larger leaps. Next, a perceptual study was conducted to help determine how well the melodies portrayed the intended emotions. Forty-nine undergraduate music majors rated each melody on twelve different emotions, half from the sad spectrum and half from the happy spectrum. As expected, melodies depicting happiness were composed in the major mode and melodies depicting sadness were largely composed in the minor mode. Ratings of emotions seemed not only to be based on the mode of the melody, but also on the note density, which appeared to amplify or dampen effects of mode on perceived emotions. Confirming the results of the perceptual study, these methods of melodic analysis suggest how composers might attempt to portray different emotions within the musical domain of melody.

10. The Effects of Expertise on Movement-mediated Emotional Processing in Music
Lucy McGarry (1)*, and Frank Russo (1)
(1) Ryerson University, Toronto, Canada
*= Corresponding author, lmcgarry@psych.ryerson.ca

Many studies have demonstrated that mimicry of emotional gestures aids in their recognition. In the context of music, mimicry of performance occurs automatically and has been hypothesized to mediate musical understanding (Livingstone, Thompson & Russo, 2009). In the current study, we examined whether exaggerated mimicry of the emotional themes in music, such as that which occurs during dance, enhances understanding of emotion conveyed by music in a similar way to motor mimicry in social contexts. Thirty dancers and 33 non-dancers were tested using a within-subjects design. Participants listened to musical clips from the Bach Cello Suites, selected based on pilot ratings to be high in arousal and valence (happy), or low in arousal and valence (sad). During music listening, participants followed randomized instructions to move hands freely to the music, sit still, or move in a constrained manner. Afterwards, all song clips were heard again while physiological responses and ratings were taken. Results demonstrated a beneficial effect of free movement on subsequent emotional engagement with music for dancers only. For each measurement we computed a polarization score by calculating the difference between responses to happy (high arousal, positive valence) and sad (low arousal, negative valence) music. Zygomatic (smiling) muscle activation, skin conductance levels, and valence and arousal ratings all showed enhanced polarization in the free movement condition. In addition, zygomatic activity mediated valence and arousal ratings in dancers. Non-dancers did not demonstrate these polarizations. Our results suggest that movement experts like dancers rely more on movement to process emotional stimuli in music. Future studies should examine whether this is due to a personality difference between dancers and non-dancers, or an expertise effect.


11. The Emotional Connotations of Diatonic Modes
David Temperley*, & Daphne Tan
Eastman School of Music, Rochester, NY, USA
*= Corresponding author, dtemperley@esm.rochester.edu

Diatonic modes are the scales that result when the tonic is shifted to different positions of the diatonic (major) scale. Given the C major scale, for example, the tonic may be left at C (Ionian or "major" mode) or shifted to D (Dorian), E (Phrygian), F (Lydian), G (Mixolydian), or A (Aeolian or "natural minor"). Many musical styles employ diatonic modes beyond Ionian, including rock and other contemporary popular styles. Experimental studies have shown that the major mode and common-practice minor mode (which is not the same as Aeolian mode) have positive and negative emotional connotations, respectively. But what of the other diatonic modes? One possible hypothesis is that modes with more raised scale degrees have more positive ("happier") emotional connotations (Huron, Yim, & Chordia, 2010). Another possible hypothesis is that the connotations of modes are driven mainly by familiarity; therefore the scales most similar to the major mode (the most familiar mode for most Western listeners) would be happiest. The predictions of these two hypotheses are partially convergent, but not totally: in particular, the Lydian mode is predicted to be happier than major by the "height" hypothesis but less happy by the "familiarity" hypothesis. In the current experiment, a set of diatonic melodies was composed; variants of each melody were constructed in each of the six different modes. In a forced-choice design, non-musician participants heard pairs of variants (i.e., the same melody in two different modes, with a fixed tonic of C) and had to judge which was happier. The data reflect a consistent pattern, with happiness decreasing as flats are added. Lydian is judged less happy than major, however, supporting the "familiarity" hypothesis over the "height" hypothesis.

12. Emotional Responses to (Modern) Modes
Peter Martens*, Jeff Larsen
Texas Tech University, Lubbock, TX, USA
*= Corresponding author, [email protected]

Historically, a bipartite categorization of modes and their affect was common, based on the quality of the mode's third scale step. Modes with a minor 3rd above their initial pitch in this position were grouped together as sad; those with a major 3rd in this position, happy (e.g., Zarlino, 1558). Recent research has substantiated these associations with modern major and minor scales (minor = sad, major = happy). The goal of this study is to explore if and how subjects differentiate scale structures that lie somewhere between major and minor scales on the basis of emotional content. Six four-bar diatonic melodies were newly composed, with each melody cast in the six historical "modern" modes: Ionian, Dorian, Phrygian, Lydian, Mixolydian, and Aeolian. Stimuli were created using a classical guitar sound within Logic software, and event density was held constant. In a pilot study, subjects rated composers' intent in terms of eliciting happiness and sadness for three of the melodies in all six modes. The stimuli were presented in one of two random orders, and subjects heard each stimulus once. Preliminary results indicate that a simple major/minor categorization does not sufficiently explain subject responses. As expected, the Ionian mode (major) and the Lydian mode were strongly associated with happiness overall, but not significantly more so than Aeolian (natural minor). By contrast, Dorian stood alone as having a strong association with sadness. Phrygian was weakly happy, while Mixolydian responses were neutral. Why might Dorian be, to misquote Nigel Tufnel, "the saddest of all modes"? The Dorian mode contains a minor third and minor seventh scale step, but a major sixth. This is a common mixture of characteristics from the major and minor scales (e.g., 1960s folk music), which perhaps heightened arousal when listening to these generally minor-sounding Dorian melodies, and thus the enlarged effect.
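Abstracts 11 and 12 both turn on differences among the six diatonic modes. The "height" notion in abstract 11 can be made concrete by summing each mode's signed deviation from the major scale in semitones; the short Python sketch below is an illustration of that bookkeeping, not the authors' materials.

```python
# Diatonic modes as semitone offsets above the tonic, with a simple
# "height" score: signed deviation from Ionian (major), in semitones.
MODES = {
    "Ionian":     [0, 2, 4, 5, 7, 9, 11],
    "Dorian":     [0, 2, 3, 5, 7, 9, 10],
    "Phrygian":   [0, 1, 3, 5, 7, 8, 10],
    "Lydian":     [0, 2, 4, 6, 7, 9, 11],
    "Mixolydian": [0, 2, 4, 5, 7, 9, 10],
    "Aeolian":    [0, 2, 3, 5, 7, 8, 10],
}
IONIAN = MODES["Ionian"]

for name, scale in MODES.items():
    height = sum(s - i for s, i in zip(scale, IONIAN))
    print(f"{name:10s} height = {height:+d}")
# Lydian +1, Ionian 0, Mixolydian -1, Dorian -2, Aeolian -3, Phrygian -4.
```

The "height" hypothesis predicts happiness in that order, with Lydian above major; the "familiarity" hypothesis keeps Ionian on top, which is the pattern abstract 11 reports.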


13. Production and Perception of Facial Expressions during Vocal Performance
Steven R. Livingstone*, Caroline Palmer and Marcelo Wanderley (1), William Forde Thompson (2)
(1) McGill University, Montreal, Canada, (2) Macquarie University, Sydney, Australia
*= steven.livingstone@mcgill.ca

Much empirical and theoretical research over the last decade concerns the production and perception of facial and vocal expression. Research has predominantly focused on static representations of facial expressions (photographs), despite the fact that facial and vocal expressions are dynamic and unfold over time. The role of this dynamic information in emotional communication is unclear. We report two experiments on the role of facial expressions in the production and perception of emotions in speech and song. In Experiment 1, twelve singers with vocal experience spoke or sang statements with one of five emotional intentions (neutral, happy, very happy, sad, and very sad). Participants' facial movements were recorded with motion capture. Functional Data Analyses were applied to marker trajectories for the eyebrow, lip corner, and lower lip. Functional analyses of variance on marker trajectories by emotion indicated significantly different trajectories across emotion conditions for all three facial markers. Emotional valence was differentiated by movement of the lip corner and eyebrow. Song exhibited significantly larger movements than speech for the lower lip, but did not differ significantly for motion of the lip corner and eyebrow. Interestingly, movements in speech and song were found prior to the onset of vocal production, and continued long after vocal production had ended. The role of these extra-vocalisation movements was examined in Experiment 2, in which participants judged the emotional valence of recordings of speaker-singers from Experiment 1. Listeners viewed (no audio) the emotional intentions (neutral, very happy, very sad) in different presentation modes: pre-vocal-production, vocal-production, and post-vocal-production. Preliminary results indicate that participants were highly accurate at identifying all emotions during vocal-production and post-vocal-production, but were significantly less accurate for pre-vocal-production. These findings suggest that speech and song share facial expressions for emotional communication, transcending differences in production demands.

14. Differential Effects of Arousal and Pleasantness in Crossmodal Emotional Transfer from the Musical to the Complex Visual Domain
Manuela M. Marin (1)*, Bruno Gingras (2), Joydeep Bhattacharya (1)(3)
(1) Department of Psychology, Goldsmiths, University of London, UK, (2) Department of Cognitive Biology, University of Vienna, Austria, (3) Commission of Scientific Visualization, Austrian Academy of Sciences, Vienna, Austria
*= Corresponding author, manuela.m.marin@gmail.com

The crossmodal priming paradigm is a new approach to address basic questions about musical emotions. Recent behavioural and physiological evidence suggests that musical emotions can modulate the valence of visually evoked emotions, especially those that are induced by faces and are emotionally ambiguous (Chen, Yuan, Huang, Chen, & Li, 2008; Logeswaran & Bhattacharya, 2009; Tan, Spackman, & Bezdek, 2007). However, arousal plays a crucial role in emotional processing (Lin, Duann, Chen, & Jung, 2010; Nielen, Heslenfeld, Heinen, Van Strienen, Witter, Jonker & Veltman, 2010; Russell, 1980) and may have confounded these priming effects. We investigated the role of arousal in crossmodal priming by combining musical primes (Romantic piano music) differing in arousal and pleasantness with complex affective pictures taken from the International Affective Picture System (IAPS). In Experiment 1, thirty-two participants (16 musicians (8 female), 16 non-musicians (8 female)) reported their felt pleasantness (i.e., valence) and arousal in response to musical primes and visual targets, presented separately. In Experiment 2, forty non-musicians (20 female) rated felt arousal and pleasantness in response to visual targets after having listened to musical primes. Experiment 3 sought to rule out the possibility of any order effects of the subjective ratings, and responses of fourteen non-musicians were collected. The results of Experiment 1 indicated that musical training was associated with elevated arousal ratings in response to unpleasant musical stimuli, whereas gender affected the coupling-strength between arousal and pleasantness in the visual emotion space. Experiment 2 showed that musical primes modulated felt arousal in response to complex pictures but not pleasantness, which was further replicated in Experiment 3. These findings provide strong evidence for the differential effects of arousal and pleasantness in crossmodal emotional transfer from the musical to the complex visual domain and demonstrate the effectiveness of crossmodal priming paradigms in general emotion research.


15. Music Can Convey Movement like Prosody in Speech
Stephen C. Hedger (1)*, Howard C. Nusbaum (1), Berthold Hoeckner (1)
(1) The University of Chicago, Chicago, IL, USA
*= Corresponding author, shedger@uchicago.edu

Analog variation in the prosody of speech has recently been shown to communicate referential and descriptive information about objects (Shintel & Nusbaum, 2007). Given that composers have used similar means to putatively communicate with music, we investigated whether acoustic variation of musical properties can analogically convey descriptive information about an object. Specifically, we tested whether temporal structure in music is integrated into an analog perceptual representation as a natural part of listening. Listeners heard sentences describing objects, and the sentences were underscored with accelerating or decelerating music. After each sentence-music combination, participants saw a picture of a still or moving object and decided whether it was mentioned in the sentence. Object recognition was faster when musical motion matched visually depicted motion. These results suggest that visuo-spatial referential information can be analogically conveyed and represented by music.

16. What Does Seeing the Performer Add? It Depends on Musical Style, Amount of Stage Behavior, and Audience Expertise
Carol Lynne Krumhansl (1)*, Jennifer Huang (1,2)
(1) Cornell University, Ithaca, NY, USA, (2) Harvard University, Cambridge, MA, USA
*= Corresponding author, clk4@cornell.edu

The purpose of this study was to examine the effects of stage behavior, expertise, composer, and modality of presentation on structural, emotional, and summary ratings of piano performances. Twenty-four musically trained and 24 untrained participants rated two-minute excerpts of pieces by Bach, Chopin, and Copland, each performed by the same pianist, who was asked to vary his stage behavior from minimal to natural to exaggerated. Participants rated the performances under either audio-only or audiovisual conditions. There were strong effects of composer, stage behavior, and response scale type, as well as interactions involving these three variables and modality of presentation. The composer's style had a consistently strong effect on the performance evaluations, highlighting the importance of careful repertoire selection. The interaction between expertise, modality, and stage behavior revealed that non-musicians perceived differences across the three degrees of stage behavior only audiovisually and not in the audio-only condition. In contrast, musicians perceived these differences under both audiovisual and audio-only conditions, with the lowest ratings for minimal stage behavior. This suggests that varying the degree of stage behavior altered the quality of the performance. In addition, the participants were asked to select two emotions that best characterized each performance. They preferentially chose more subtle emotions from Hevner's (1936) Adjective Circle over the five emotions of happiness, sadness, anger, fear, and tenderness traditionally used in music cognition studies, suggesting that these five emotions are less apt to describe the emotions conveyed through musical performance.


17. Effects of Interactions with Young Children on Japanese Women's Interpretation of Musical Babblings
Mayumi Adachi*
Hokkaido University, Sapporo, Japan
*= Corresponding author, m.adachi@let.hokudai.ac.jp

Japanese mothers and young women tend to interpret a Japanese toddler's babblings deriving from infant-directed speech contexts as speech-like and those from infant-directed song contexts as song-like. In the present study, I investigated whether interactions with young children could affect the use of vocal cues among Japanese mothers (Experiment 1) and Japanese young women (Experiment 2). In Experiment 1, 23 Japanese mothers who participated in Adachi and Ando (2010) fell into two groups based on scores (0-12) of how actively they were singing/talking to their own child: "active" (scores 8-12, n=13) and "less active" (scores 3-7, n=10). Each mother's data were used to determine vocal cues that contributed to her own interpretation of 50 babblings by means of step-wise variable selection of logistic regression, with the interpretation as dependent variable (song-like versus speech-like) and 15 vocal features (identified in Adachi & Ando, 2010) as predictor variables. In Experiment 2, the same analyses will be conducted with data obtained from Japanese young women who had been interacting with 6-year-olds or younger ("experienced") and from a matched sample without such interactions ("inexperienced"). Results in Experiment 1 revealed that 11 out of 13 "active" mothers were using particular cues consistently while only 4 out of 10 "less active" mothers were doing so, χ²(1, N=23) = 4.960, p = .026. In addition, among the mothers using particular cues, the "active" mothers used an average of more than 3 cues while the "less active" mothers used an average of 1 cue. (Results in Experiment 2 will be presented at the talk.) The present study will reveal the role of interactions with young children in current and prospective mothers' interpretations of song-like/speech-like babblings. Such information may explain why some toddlers produce more spontaneous songs than others.

18. Age-related Changes in Children's Singing
Sandra E. Trehub (1)*, Lily Zhou (2), Judy Plantinga (1), Mayumi Adachi (3)
(1) University of Toronto, Ontario, Canada, (2) McMaster University, Hamilton, Ontario, Canada, (3) Hokkaido University, Sapporo, Japan
*= Corresponding author, [email protected]

Previous studies have documented age-related improvements in children's singing, usually by expert ratings rather than measurement. However, no study has attempted to identify factors that may contribute to such improvement. Adults sing less accurately with lyrics than with a neutral syllable (Berkowska & Dalla Bella, 2009), highlighting the demands of word retrieval. Perhaps age-related differences in singing accuracy are attributable, in part, to age-related changes in memory. In the present study we focused on interval accuracy and singing rate in children's renditions of a familiar song (ABC, Twinkle) sung with words or on the syllable /la/. Children were 4-12 years old, with 24 in each of three age bands: 4-6, 7-9, and 10-12 years. The first 17 beats of each performance were analyzed by means of Praat. Duration of the first two measures provided an estimate of singing rate. A regression analysis with gender, age, and singing rate revealed significant effects of age (greater pitch accuracy at older ages) and singing rate (greater accuracy with slower singing) in performances with lyrics. Regression analysis on songs sung on /la/ revealed no differences, calling into question claims of increasing singing proficiency in this age range. A two-way ANOVA (interval size, lyrics/syllables) revealed significant effects of interval size (greater pitch deviations with larger intervals), F(4,284) = 44.79, p < 0.001, and lyrics (greater pitch deviations with lyrics), F(1,71) = 9.18, p = .003. Regression analysis also revealed that age and singing rate had significant effects on key stability, as reflected in deviations from the tonic, but only for performances with lyrics. In short, the need to retrieve lyrics and tunes has adverse consequences for children, as reflected in reduced pitch accuracy, poor key stability, and slow singing rate. We suggest that the development of singing proficiency in childhood could be studied more productively by means of pitch and interval imitation.
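Abstract 18 quantifies interval accuracy as pitch deviations in Praat-analyzed singing. Given two successive F0 estimates, the sung interval in semitones is 12 * log2(f2/f1), which can then be compared with the target interval. A minimal Python sketch with hypothetical frequency values, not the study's data:

```python
# Sung interval sizes in semitones from successive F0 estimates (Hz),
# compared against a target melody. Frequencies below are hypothetical.
import math

def semitones(f1: float, f2: float) -> float:
    """Signed interval from f1 to f2: 12 * log2(f2 / f1)."""
    return 12 * math.log2(f2 / f1)

sung_f0 = [262.0, 290.0, 335.0, 348.0]   # a child's first four notes
target = [0, 2, 4, 5]                    # target scale degrees (semitones)

sung = [semitones(a, b) for a, b in zip(sung_f0, sung_f0[1:])]
ideal = [b - a for a, b in zip(target, target[1:])]

for s, t in zip(sung, ideal):
    print(f"sung {s:+.2f} st, target {t:+d} st, deviation {s - t:+.2f} st")
```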


19. Do Infants Perceive the Beat in Music? A New Perceptual Test
Aniruddh D. Patel (1)*, John R. Iversen (1), Melissa Brandon (2), Jenny Saffran (2)
(1) The Neurosciences Institute, San Diego, USA, (2) University of Wisconsin, Madison, USA
*= Corresponding author, apatel@nsi.edu

Beat perception is fundamental to music perception. How early does this ability develop? While infants do not synchronize their movements to a musical beat (Zentner & Eerola, 2010), it is possible they can perceive the beat, just as sophisticated speech perception abilities precede the ability to talk. Hence evidence for infant beat perception must come from perceptual tests. A recent event-related potential (ERP) study of beat perception in sleeping newborns suggested that they recognized the omission of the downbeat in a drum pattern (Winkler et al., 2009), but the downbeat omission stimulus (unlike omissions at other positions) was created by silencing two drum parts at once, making it possible that the brain response was to a change in the texture of the sound rather than a reflection of beat perception. Other studies of infant meter perception have used cross-modal approaches (e.g., Phillips-Silver & Trainor, 2005) or cross-cultural approaches (e.g., Hannon & Trehub, 2005), but the sensitivities demonstrated by infants in these studies may be explainable on the basis of grouping perception and/or sensitivity to event duration patterns, without invoking beat perception. The current study used a novel perceptual test to examine beat perception in 7-8 month old infants. This was a simplified version of the BAT (Beat Alignment Test, Iversen & Patel, 2008), in which a metronomic beep track is overlaid on long excerpts of real music (popular Broadway instrumental tunes). The beeps were either on the beat, too fast, or too slow. A preferential looking paradigm was used, and it was found that infants preferred the music with the on-beat overlay tracks, suggesting that they do in fact perceive the beat of complex music. The presentation will include a discussion of how the BAT might be improved for future research on infant and adult beat perception.

20. The Development of Sensitivity to Key Membership and Harmony in Young Children
Kathleen A. Corrigall (1)*, Laurel J. Trainor (1,2)
(1) McMaster Institute for Music and the Mind, Hamilton, Canada, (2) Rotman Research Institute, Baycrest Hospital, Toronto, Canada
*= Corresponding author, corrigka@mcmaster.ca

Even Western adults with no formal music training have implicit knowledge of key membership (which notes belong in a key) and harmony (chords and chord progressions). However, little research has explored the developmental trajectory of these skills, especially in young children. Thus, our primary goal was to investigate 4- and 5-year-olds' knowledge of key membership and harmony. On each trial, children watched videos of two puppets playing a 2- to 3-bar melody or chord sequence and were asked to give a prize to the puppet that played the best song. One puppet played a standard version that followed Western harmony and voice leading rules and always ended on the tonic, while the other played one of three deviant versions: 1) atonal, which did not establish any particular key, 2) unexpected key, which replicated the standard version except for the last note or chord, which went outside the key, and 3) unexpected harmony, which replicated the standard version except for the last note or chord, which ended on the subdominant. Children were assigned to one of the three deviant conditions, and completed four trials each of melodies and chords. Our dependent measure was the proportion of trials on which children selected the puppet that played the standard. Results of the 35 4-year-olds and 36 5-year-olds tested to date revealed that 5-year-olds selected the standard version significantly more often than predicted by chance for both melodies and chords in the atonal and unexpected key conditions. In the unexpected harmony condition, 5-year-olds' performance did not differ from chance for either melodies or chords. Four-year-olds performed at chance in all conditions. These results indicate that aspects of musical pitch structure are acquired during the preschool years.


21. Investigating Mothers' Live Singing and Speaking Interaction with Preterm Infants in NICU: Preliminary Results
Manuela Filippa (1)*, Maya Gratier (1)
(1) Université Paris Ouest Nanterre La Défense, Paris, France
*= Corresponding author, mgfilippa@libero.it

Vocal communication between mothers and infants is well documented in the first months of life (Gratier & Apter, 2009), but few observations involve preterm infants. This article reports on the theoretical underpinnings of the study of maternal singing, considered as an important relational-based intervention in early dyadic communication between mothers and preterm infants. To date, 10 out of 20 preterm neonates have been studied. Their weight at birth was between 950 and 2410 g, and the entry criteria at the time of the recordings were (1) >29 weeks PCA, (2) >1000 g, (3) stable condition (absence of mechanical ventilation, no additional oxygen needed, no specific pathological conditions). All infants are tested for 6 days, during their hospital stay, in their room in their own incubators, one hour after their afternoon feeding. All mothers involved are asked, on 3 different days, to speak and sing to their infants. Before and between these days, a day with no stimulation provides comparison data. The sessions are video and audio recorded both during the stimulation and also for 5 minutes before and after the stimulation. Clinical parameters are automatically recorded every minute and "critical events" are marked; individual behavioral and interactional reaction responses are measured as infant engagement signals. During maternal vocal stimulation we found an increase of HR and oxygen saturation (p>0.05); a decrease in the standard deviation of clinical parameters, a decrease of "critical events", and an increase of calm alertness states. The results indicate that the maternal live speaking and singing stimulation has an activation effect on the infant, as we observe an intensification of the proto-interaction and of the calm alertness states (Als, 1994) in a clinically stable condition, with a significant increase of HR and oxygen saturation (p>0.05).

22. Koechlin's Volume: Effects of Native Language and Musical Training on Perception of Auditory Size among Instrument Timbres
Frédéric Chiasson (1)(2)(3)*, Caroline Traube (1)(2)(3), Clément Lagarrigue (1)(2), Benneth Smith (3)(4), and Stephen McAdams (3)(4)
(1) Observatoire interdisciplinaire de recherche et création en musique (OICRM), Montréal, Canada, (2) Laboratoire informatique, acoustique et musique (LIAM), Faculté de musique, Université de Montréal, Canada, (3) Centre for Interdisciplinary Research in Music, Media and Technology (CIRMMT), Montréal, Canada, (4) Schulich School of Music, McGill University, Montréal, Canada
*= Corresponding author, [email protected]

Koechlin's orchestration treatise (Traité de l'orchestration) ascribes different dimensions to timbre than those usually discussed in multidimensional scaling studies: volume, or largeness, related to auditory size, and intensité, related to loudness. Koechlin gives a mean volume scale for most orchestral instruments. Studies show that auditory size perception exists for many sound sources, but none proves its relevance for instruments from different families. For both experiments of this study, we have developed methods and graphical interfaces for testing volume perception. Samples of eight orchestral instruments from the Electronic Music Studio of the University of Iowa, playing Bs and Fs mezzoforte in octaves 3, 4, and 5 (where A4 is 440 Hz), were used. We kept the first second of all samples, keeping attack transients, and added a fade-out to the last 100 ms. For each pitch category, samples were equalized in pitch, but not in loudness, to keep the loudness differences of a concert situation. Task 1 required participants to order eight sets of samples on a largeness (grosseur in French) scale from "least large" (moins gros) to "largest" (plus gros). Task 2 required them to evaluate the sounds' largeness on a ratio scale compared to a reference sample with a value of 100. Participants were compared according to native language (English vs French), musical training (professional musicians vs amateurs and nonmusicians), and hearing (good vs minor hearing loss). Results suggest that participants share a common perceptual largeness among instrument timbres from different families. This common perceived largeness is well correlated with Koechlin's volume scale. Native language, musical training, and hearing have no significant effect on results. These results provide new angles for timbre research and raise questions about the influence of loudness equalization in most studies on timbre.


23. Perception of Dyads of Impulsive and Sustained Sounds
Damien Tardieu (1)*, Stephen McAdams (2)
(1) IRCAM - CNRS UMR 9912, Paris, France, (2) CIRMMT, Schulich School of Music, McGill University, Montreal, Canada
*= Corresponding author, Damien.Tardieu@ircam.fr

Perception of instrumental blends is important for understanding aspects of orchestration. Previous work (Kendall & Carterette, 1991; Sandell, 1995) has focused on dyads of sustained sounds. However, a common technique in orchestration consists of using mixtures of impulsive and sustained sounds. The first experiment identified the factors that influence the blending of dyads, i.e., whether they are perceived as one or two sounds. 11 sustained and 11 impulsive sounds of the same pitch and loudness were used, yielding a total of 121 dyads. Each participant rated each dyad four times on a continuous scale between "twoness", to indicate the absence of blend, and "oneness", to indicate perfect blend. We found that longer attack times and lower spectral centroids improve fusion. The choice of the impulsive sound thus seems more important than the choice of the sustained sound in controlling blend. The second experiment determined the factors that influence the perception of similarity between the dyads. Participants rated the dissimilarity on a continuous scale between pairs formed of 16 well-blended dyads chosen from the previous 121 to maximize the timbral variety. In this experiment, contrary to the first experiment, the sustained instrument had more influence on the perception of similarity. The mean spectral envelope of the dyad is the feature that best explains the similarity ratings, but the spectral envelope of the sustained sound is more important than the spectral envelope of the impulsive sound. A multidimensional scaling of the dyad dissimilarity ratings yields one dimension correlated with the attack time of the dyad and another dimension whose spectral correlate is different for two different clusters within the space, suggesting a combined categorical-analogical organization of the second dimension.

24. Blend, Identification, and Similarity of Differentially Orchestrated Wind Triads Correlated with Acoustical Analyses of Spectral Distribution and Roughness
Roger A. Kendall (1)*, Pantelis N. Vassilakis (2)
(1) Music Cognition and Acoustics Laboratory, Herb Alpert School of Music, University of California, Los Angeles, CA 90024, [email protected], (2) Audio Arts + Acoustics, School of Media Arts, Columbia College Chicago, Chicago, IL 60605
*= Corresponding author, [email protected]

Non-traditional orchestrations of flute, oboe, and clarinet are extended in this study. Traditional doublings within a triad are compared to less common orchestrations with separate instruments on each note. Major and minor triads (C5) were orchestrated using Kontakt Silver sampled oboe, clarinet, and flute tones and incorporated doublings suggested by orchestration monographs. Unison doublings were enhanced with a chorus effect created by slightly misaligning the dyad tones' fundamental frequency and onset time. Music-major subjects rated sets of triads on similarity and degree of blend, and participated in identification tasks for the soprano and bass instruments. Perceptual spaces derived from the similarity data corresponded well to previous multidimensional scalings where the first-dimension triad distribution was related to the timbre in the bass of the triad. Correlations with long-time-average spectral centroid were high for both major and minor contexts (r = .98 and .94, respectively). Calculations of roughness based on one of the authors' formulations, using spectral time-frequency reassignment, correlated well with the major context's first dimension as well. Higher blend ratings were obtained for major vs. minor orchestrations; additional analyses relate blend and identification to timbral combination. In particular, the similarity of clarinet and oboe timbres on G5, and their corresponding spectral similarities, appears to lead to identification difficulties.
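Abstracts 23 and 24 both use the spectral centroid (long-time-average in abstract 24) as an acoustic correlate of their perceptual dimensions. The centroid is the amplitude-weighted mean frequency of the spectrum; below is a minimal Python sketch on a synthetic two-partial tone, for illustration only.

```python
# Spectral centroid: amplitude-weighted mean frequency of a spectrum.
import numpy as np

def spectral_centroid(signal: np.ndarray, sr: int) -> float:
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

# Synthetic tone: 440 Hz fundamental plus a half-amplitude octave partial.
sr = 44100
t = np.arange(sr) / sr  # one second of samples
tone = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
print(f"{spectral_centroid(tone, sr):.1f} Hz")  # ~586.7 Hz, between partials
```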


25. The Relevance of Spectral Shape to Perceptual Blend between Instrument Timbres
Sven-Amin Lembke (1)*, Stephen McAdams (1)
(1) CIRMMT, Schulich School of Music, McGill University, Montreal, Canada
* = Corresponding author, sven-amin.lembke@mail.mcgill.ca

Previous studies have suggested the perceptual relevance of stable spectral properties characterizing the timbre of orchestral wind instruments. Analogous to human voice formants, the existence of stable local spectral maxima across a wide pitch range has been reported for these instruments. Furthermore, agreement of these 'formant regions' between instrumental sounds has been suggested to contribute to the percept of blend between timbres. Our aim is to verify and validate these hypotheses based on a two-stage approach comprising acoustical analysis and perceptual testing. Spectral analyses are computed on a broad audio sample database across all available pitches and dynamic levels. Based on the obtained spectral information, partial tones are identified and their frequencies and amplitudes used to build global distributions of partials across all pitches and between dynamic levels. A curve-fitting procedure applied to these distributions then yields spectral profiles from which characteristics such as 'formant regions' are identified and described. This can be complemented by signal processing techniques such as linear-predictive-coding or cepstrum analyses to attain parametric expressions for spectral shape. The second stage takes the obtained formant characteristics and tests their perceptual relevance in an experiment employing a production task. Stimuli drawn from the aforementioned instrumental sounds are used as reference sounds to which a synthesized sound exhibiting variable formant properties is matched. Participants adjust sliders controlling synthesis parameters in order to achieve the maximum attainable blend, with accuracy between produced and acoustically determined parameter values taken as the dependent variable. Besides providing a validation for the contribution of 'formant regions' to perceptual blend, the experiment's multifactorial design allows their relevance to be investigated across different instruments, pitch intervals and registers. Results from these two stages will provide a clearer picture of what perceptual blend corresponds to acoustically and would furthermore help explain its usage in orchestration practice.

26. Using Percussive Sounds to Improve the Efficacy of Auditory Alarms in Medical Devices
Glenn Paul* and Michael Schutz
McMaster Institute for Music and the Mind, Hamilton, Ontario, CANADA
* = Corresponding author, paulg2@muss.cis.mcmaster.ca

Auditory alarms are a desirable feature in medical devices, allowing doctors to monitor patients' vital signs by ear while simultaneously observing them visually. However, such alarms rely on a doctor's ability to remember associations between sounds and messages, a task that has proven difficult (Block, 2008; Sanderson, 2009) even when using the sounds recommended by the International Electrotechnical Commission (IEC, 2005). One reason for this difficulty may stem from the amplitude envelopes (i.e., "temporal shape") of these sounds. Recent work in our lab has shown that sounds with percussive envelopes (exhibiting an immediate exponential decay with no sustain) are advantageous for tasks involving learned associations (Schutz & Stefanucci, 2010). Therefore, we are now working to explore the possibility of using them to improve the efficacy of auditory alarms in medical devices. We have developed a new paradigm for this endeavor. In the first (study) phase, participants read a script describing a scenario in which the alarms may be used, and hear each twice. In the second (training) phase, participants hear each of the eight alarms in turn, and are asked to identify it (with feedback). Participants repeat this training until they are able to identify seven of the eight alarms. After a distracter task, participants enter the third (evaluation) phase, in which they are tested on their ability to name each alarm. Our pilot data indicate that this paradigm is viable for assessing three crucial factors: (1) the amount of training required to learn the alarms, (2) the strength of the association between tones and alarms, and (3) confusion between similar alarms. We found that musical training had a significant influence on the ability to learn the alarms, and we are now collecting data to explore whether envelope manipulations can help improve their efficacy in medical care.
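The envelope distinction driving this work is easy to state precisely. Below is a minimal Python/NumPy sketch, for illustration only (the decay constant and tone parameters are arbitrary assumptions, not the study's stimulus specifications), contrasting a percussive envelope, i.e. an immediate exponential decay with no sustain, with a flat sustained envelope:

    import numpy as np

    SR = 44100  # sample rate (Hz)

    def tone(freq=440.0, dur=0.5):
        """Plain sine tone with a flat (sustained) envelope."""
        t = np.arange(int(SR * dur)) / SR
        return np.sin(2 * np.pi * freq * t)

    def percussive(signal, decay=8.0):
        """Impose an immediate exponential decay (no sustain) on a signal."""
        t = np.arange(len(signal)) / SR
        return signal * np.exp(-decay * t)

    flat_alarm = tone()                     # sustained envelope
    percussive_alarm = percussive(tone())   # percussive envelope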


Keynote lecture: Cognitive Factors Shape Brain Networks for Auditory Skills
Professor Nina Kraus
Hugh Knowles Professor (Communication Sciences; Neurobiology & Physiology; Otolaryngology), Northwestern University

We know that musicians profit from real-life advantages such as a greater ability to hear speech in noise and to remember sounds. The extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine the role that auditory working memory plays in hearing speech in noise and potential biological underpinnings of musicians' auditory advantages. We present a framework that emphasizes cognition as a major player in the neural processing of sound. Within this framework, we provide evidence for how music training is a contributing source of these abilities.


27. Listeners' Images of Motion and the Interaction of Musical Parameters
Zohar Eitan (1)*, Roni Y. Granot (2)
(1) School of Music, Tel Aviv University, Israel (2) Department of Musicology, The Hebrew University of Jerusalem, Israel
* = Corresponding author, [email protected]

Eitan & Granot (2006; henceforth E&G) examined how listeners map changes in musical parameters onto aspects of bodily motion. Participants associated melodic stimuli with imagined motions of a human character and specified the movement directions, pace changes and type of these motions. Stimuli consisted of contrasting pairs of melodic figures, independently manipulating loudness change, pitch direction, tempo, and pitch interval size. In the current study we begin to examine systematically the effects of interactions between musical parameters on music-motion mappings. Twenty brief melodic stimuli (3-6 seconds) were presented to 78 participants (35 music-trained). Participants' task was identical to those in E&G. Stimuli systematically combined concurrent changes in four musical parameters: 4 stimuli combined loudness changes (crescendo/diminuendo) and pitch directions (up/down), 4 combined loudness and tempo changes (accelerando/ritardando), 4 combined pitch directions and tempo change, and 8 combined loudness change, pitch direction, and changes in interval size. Results corroborate that dimensions of motion imagery, rather than exhibiting one-to-one mappings of musical and motion parameters (pitch-height, tempo-speed, loudness-distance), are affected by several musical parameters and their interactions. Thus, speed change associates not only with tempo, but with changes in loudness and pitch direction (e.g., participants did not associate accelerated stimuli with increased speed when loudness was simultaneously reduced); vertical direction (rise/fall) is associated not only with pitch direction but with loudness (pitch ascents in diminuendo were associated with spatial descent); and distance change is associated not only with loudness change but with pitch direction. Moreover, significant interactions among musical parameters suggest that effects of single musical parameters cannot wholly predict music-motion mappings. For instance, both loudness and pitch, and pitch and tempo, significantly interact in conveying distance change. This multidimensional view of perceived musical motion may bear important implications for musical multimedia, sonification, and music theory and analysis.

28. Crossmodal Analogues of Tempo Rubato
Fernando Benadon (1)*, Madeline Winkler (1)
(1) American University, Washington DC, USA
* = Corresponding author, fernando@american.edu

Do listeners perceive tempo rubato as concomitant with a feeling of effort? Thirty-six participants were presented with 12 short descriptions depicting the actions of a hypothetical character named Stick Figure. Each scene was paired to a soundtrack melody that was either rubato or non-rubato. Using the soundtrack melody as a cue, the task was to rate the level of effort as experienced by Stick Figure. For instance, the scene "Stick Figure is lifting a box; the box is ___" required participants to provide a score between the slider endpoints "very light" (low effort) and "very heavy" (high effort). A scene's effort type was either motor (e.g., lifting a light/heavy box) or cognitive (e.g., solving a simple/difficult puzzle). A two-way repeated measures ANOVA showed a significant main effect of rhythm type (rubato vs. non-rubato); there was no significant difference between the two effort types (motor vs. cognitive). These results suggest that tempo rubato is crossmodally correlated with a metaphorical sense of effort.


29. Training of Instrument Recognition by Timbre in Non-musicians: A Rapid Learning Approach
Lisa Aufegger (1)*, Oliver Vitouch (1)
(1) Cognitive Psychology Unit, Dept. of Psychology, University of Klagenfurt, Austria
* = Corresponding author, Lisa.Aufegger@uni-klu.ac.at

The question of whether attaining specific competencies in music is part of the individual human potential or rather a common consequence of deliberate practice is still a much debated issue. While musical abilities and their genetic bases are still associated with giftedness, other approaches show musicality to be the result of a 10-year preparation period, including thousands of hours of training in this particular field (Ericsson, Krampe, & Tesch-Römer, 1993). In this tradition, the "rapid-learning approach" of Oechslin, Läge, & Vitouch (under review) proposes that any individual is able to attain basic and specific competencies in music perception approaching the performance level of professional musicians in a quick and reliable manner by means of a specific training methodology. In this study, the rapid-learning approach was tested for the case of instrument recognition by timbre in non-musicians. N = 34 subjects had to identify 10 solo instruments (6 woodwind and 4 brass) from brief orchestral recordings in a pre-/posttest design. Three PC-based training units with a maximum duration of 90 min provided an intensive examination of each instrument via simple feedback strategies. Results showed a significantly improved identification of instruments in the posttest. When compared to 19 music students (wind players and pianists), the subjects did not achieve expert recognition level (as compared to wind players), but semi-expert recognition level (as compared to pianists). Given an adequate methodology and using a feedback-based approach, non-musicians are, as demonstrated, able to excel. In the context of perceptual learning there may indeed be a broad and general basis for quickly acquiring and improving fundamental competencies of music perception.

30. More than Just Musical Ability: Regulatory Fit Contributes to Differences between Musicians and Non-musicians in Music Perception
J. Devin McAuley (1)*, Molly J. Henry (2), Alan Wedd (3)
(1) Department of Psychology, Michigan State University, East Lansing, USA, (2) Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany, (3) Department of Psychology, Michigan State University, East Lansing, USA
* = Corresponding author, dmcauley@msu.edu

Two experiments examined effects of regulatory fit and musical training on performance on a representative subtest of the Montreal Battery of Evaluation of Amusia (MBEA). In both experiments, participants made same-different judgments about pairs of melodies, while either gaining points for correct answers (a gains condition) or losing points for incorrect answers (a losses condition). In Experiment 1, participants were told that the test was diagnostic of their musical ability and then were asked to identify themselves as a 'musician' or a 'non-musician', while in Experiment 2 participants were given either a promotion focus prime (they were told that they had a performance-based opportunity to gain entry into a raffle at the end of the experiment) or a prevention focus prime (they were given a raffle ticket at the start of the experiment and needed to maintain a certain level of performance in order to prevent losing their entry into the raffle). Consistent with a regulatory fit hypothesis, non-musicians and promotion-primed participants performed better in the gains condition than in the losses condition, while musicians and prevention-primed participants performed better in the losses condition than in the gains condition. Experiment 2 additionally revealed that regulatory fit effects were stronger for musicians than for non-musicians. This study is the first to demonstrate that regulatory fit impacts performance on the MBEA and highlights that individual differences in motivational orientation are important to consider when interpreting musician performance advantages in music perception.


31. Musical Aptitude Predicts Ability to Discriminate Music and Speech from Background Noise
Jordan C. Schramm (1) & Anne E. Luebke* (1,2)
(1) Department of Neurobiology & Anatomy, (2) Department of Biomedical Engineering, University of Rochester Medical Center, Rochester, NY 14642, USA
* = Corresponding author, [email protected]

17% of American adults report some degree of difficulty hearing in a noisy environment. Recent findings suggest musicians with >9 yrs of musical training have an enhanced ability to discriminate speech in the presence of background noises (Parbery-Clark A, Skoe E, Lam C, Kraus N. Ear Hear. 2009). We wondered if trained musicians also had enhanced abilities to discriminate music in the presence of background noises, and if subjects with high musical aptitudes would also have enhanced discrimination-in-noise abilities. To test these hypotheses, we recruited adults between 18 and 29 yrs and tested: i) standard audiometric thresholds; ii) speech-in-noise intelligibility using the Hearing-in-Noise Test (HINT); iii) music perception-in-noise ability using the same protocol used for the HINT, but replacing the speech with questions from the Montreal Battery for Evaluation of Amusia (MBEA); iv) musical aptitude using the tonal imagery melody portion of the Musical Aptitude Profile (MAP T1). In addition, all subjects completed a survey of their musical training and listening history. Our results confirm previous findings that higher musical achievement correlates with enhanced speech-in-noise abilities. Furthermore, we determined that subjects with high musical aptitudes and low musical achievement also had enhanced hearing-in-noise abilities. We conclude that enhanced ability to discriminate both speech and music in the presence of background noise is better predicted by musical aptitude than by musical achievement. Supported by grants from NIH [DC003086 (AEL), KL2 RR024136 (JCS)].

32. Music Processing in Deaf Adults with Cochlear Implants
Mathieu R. Saindon (1)*, Sandra E. Trehub (1), E. Glenn Schellenberg (1)
(1) University of Toronto, Toronto, Canada
* = Corresponding author, [email protected]

Cochlear implants (CIs) provide relatively coarse representations of pitch and spectral detail, which are adequate for speech recognition in quiet but not for some aspects of speech prosody and music. The available research reveals that CI users have difficulty differentiating vocal emotion and that they fail to recognize familiar music on the basis of pitch cues alone. Nevertheless, research in these domains has been relatively limited. In the present study, we assessed a range of speech and music processing skills in 9 successful CI users and 12 hearing controls. The music tasks assessed the perception of meter, rhythm, pitch direction, isochronous melodies, timbre, and emotion as well as pitch imitation and the recognition of familiar music. The speech tasks assessed the recognition of monosyllabic words and vocal emotion. Overall, the performance of hearing individuals was substantially better than that of CI users, whose performance was highly variable. The threshold for pitch direction discrimination for hearing adults was substantially smaller than that of CI users, but three CI users performed as well as hearing listeners. Two CI users matched pitches with incredible accuracy (<1 semitone). Surprisingly, one CI user readily recognized isochronous melodies and familiar music, in contrast to the very poor performance of other CI users on these tasks. Only two CI users performed well on timbre recognition. Although all CI users performed well on meter discrimination, most had difficulty with rhythm discrimination in the context of accompanied melodies, and one exhibited error-free performance. Finally, CI users performed poorly at perceiving emotion in music and even worse at perceiving emotion in speech. Overall, our findings are in line with previous findings on melody and prosody discrimination. However, some CI users' unexpectedly high performance levels imply that CIs may not be insurmountable barriers to musical pitch processing.


33. Contribution of Hearing Aids to Music Perception by Cochlear Implant Users
Tonya Bergeson (1)*, Nathan Peterson (1)
(1) Indiana University School of Medicine, Indianapolis, USA
* = Corresponding author, [email protected]

Cochlear implant (CI) encoding strategies represent the temporal envelope of sounds well but provide limited spectral information. This deficit in spectral information has been implicated as a contributing factor to difficulty with speech perception in noisy conditions, discriminating between talkers, and melody recognition. One way to supplement spectral information for CI users is by fitting a hearing aid (HA) to the non-implanted ear. In this study 14 postlingually deaf adults (7 with a unilateral CI and 7 with a CI and a HA (CI+HA)) were tested on measures of music perception (MBEA) and familiar melody recognition. CI+HA listeners performed significantly better than CI-only listeners on all pitch-based music perception tasks. The CI+HA group did not perform significantly better than the CI-only group in the two tasks that relied on duration cues. Recognition of familiar melodies was significantly enhanced for the group wearing a HA in addition to their CI. This advantage in melody recognition was increased when melodic sequences were presented with the addition of harmony. Although both groups scored higher on rhythmic tests than on pitch-based tests, they did not appear to use this information well in identifying real-world melodies. These results show that, for CI recipients with aidable hearing in the non-implanted ear, using a hearing aid in addition to their implant improves perception of musical pitch and recognition of real-world melodies.

34. Interspike Intervals, Subharmonics, and Harmony
Peter Cariani*
Dept. of Otology and Laryngology, Harvard Medical School
* = Corresponding author, cariani@mac.com

A very strong case can be made that the auditory system utilizes a temporal, interspike interval code for the early representation of periodicity and spectrum (Cariani & Delgutte, JASA 1996; Cariani, Neural Plasticity, 1999). The pitch of harmonic complex tones at the fundamental corresponds to the most numerous all-order interspike interval present in the auditory nerve at any given moment. As a direct consequence of phase-locking of spikes, all-order interspike interval distributions in different frequency regions of the auditory nerve reproduce the subharmonic series of the individual harmonics that drive them. When all of the intervals are summed together, those associated with common subharmonics, i.e. the fundamental and its subharmonics, predominate, and the pitch associated with this interval pattern is heard. Pitch strength is qualitatively predicted by the relative fraction of pitch-related intervals. In the case of note dyads, in neurophysiological studies (Tramo et al., NYAS, 2001) and simulations (Cariani, ICMPC, 2004), the predominant interspike intervals produce the fundamental bass of the musical interval. Estimated pitch strengths of the fundamental basses of different dyads (e.g. 16:15, 4:3, 45:32, 3:2) reflect degree of pitch multiplicity, fusion (Stumpf), stability, and harmonic tension. Results from auditory nerve simulations of triads of harmonic complexes yielded major triads and sus-4 chords as most stable, followed by minor and sus-2 chords, with augmented and diminished chords as the least stable. Thus, there is the possibility of a neurally-grounded theory of basic harmony that is based on superpositions of subharmonics ("undertones"). Musical intervals bear characteristic patterns of subharmonics. In the past such theories have been proposed (Rameau, Riemann), but abandoned because, unlike harmonics, subharmonics are generally not heard, and it was believed that subharmonics do not exist either in the acoustics or in the ear. However, we now know from auditory neurophysiology that subharmonic series are ubiquitous in the auditory nerve, and perhaps these sorts of theories deserve re-examination.
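Abstract 34's subharmonic account admits a small worked example. A tone of frequency f produces all-order interspike intervals at multiples of its period, corresponding to subharmonics f/1, f/2, f/3, ...; the highest frequency common to both tones' subharmonic series is the fundamental bass of the dyad. The sketch below is a toy enumeration in this spirit (frequencies chosen arbitrarily; it is not a simulation of auditory-nerve activity):

    from fractions import Fraction

    def fundamental_bass(f1, f2, max_order=32):
        """Highest frequency present in the subharmonic ("undertone")
        series of both tones: f/n for n = 1..max_order."""
        subs1 = {Fraction(f1) / n for n in range(1, max_order + 1)}
        subs2 = {Fraction(f2) / n for n in range(1, max_order + 1)}
        common = subs1 & subs2
        return max(common) if common else None

    print(fundamental_bass(300, 200))  # 3:2 dyad (perfect fifth)  -> 100 Hz
    print(fundamental_bass(320, 300))  # 16:15 dyad (minor second) -> 20 Hz

For the fifth, the shared subharmonic lies close below the tones and is frequently reinforced; for the minor second it is far lower and sparsely reinforced, in line with the predicted ordering of fusion and tension.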


35. The Music of Speech: Heterometers and Melodic Arches
Steven Brown (1)*, Ivan Chow (1), Kyle Weishaar (2), Jordan Milko (1)
(1) Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada (2) Department of Linguistics and Languages, McMaster University, Hamilton, ON, Canada
* = Corresponding author, stebro@mcmaster.ca

Work on speech rhythm has been notoriously oblivious to describing actual rhythms in speech. Likewise for work on speech melody. Musical transcription provides a joint solution for both problems as well as a means of examining the interaction between rhythm and melody. A first step in our analysis of speech is to divide a sentence into a fundamental rhythmic unit, what we refer to as a "prominence group", analogous to a measure in music. The defining feature of a prominence group is that it always begins with a strong beat (i.e., a stressed syllable in the case of English). Contrary to classic "isochrony" models in speech, we posit that changes in meter are central to speech rhythm, and thus that speech is "heterometric" rather than isochronous. By this we mean that there are local pockets of metricality in sentences but that meters can change throughout the course of a sentence, for example from a triple-based to a duple-based meter. A fundamental unit of speech melody is more difficult to define. However, classic work in phonology suggests that sentences are built up of a series of melodic arches or waves, where each arch corresponds with an intonational phrase. These arches diminish successively in register and size as a sentence proceeds, reflecting a general process of declination in speech. Finally, this method permits an analysis of the interaction between rhythm and melody in speech. A common interaction is seen in the phrase "beautiful flowers", with a rising contour on beautiful and a falling one on flowers. While a stress occurs on "flow-", the melodic arch is centered on the "-ful" of beautiful. Hence, the peak of the melodic arch tends to precede the rhythmic downbeat, which itself is instantiated with an intensity rise rather than a pitch rise in many cases.

36. Are Tone Languages Music? Rhythm and Melody in Spoken Cantonese
Ivan Chow (1)*, Matthew Poon (2), Steven Brown (1)
(1) Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada (2) School of the Arts, McMaster University, Hamilton, ON, Canada
* = Corresponding author, stebro@mcmaster.ca

Cantonese is a tone language with 6 level tones and 3 contour tones. Rhythmically, it is categorized as a "syllable-timed" language, implying that Cantonese syllables are of equal duration throughout a sentence. However, a closer look at the rhythm of spoken Cantonese sentences reveals that Cantonese sentences are not simply sequences of syllables of equal duration, which would sound quite "robotic" to a native speaker. Rather, there are predictable rhythmic structures in Cantonese sentences that resemble those of so-called "stress-timed" languages like English, despite the fact that Cantonese has no word-level stress. Syllables are organized into larger metric units similar to "measures" in music. Syllabic durations within these metric structures can vary according to musical principles, including the formation of duplets, triplets, and polyrhythms at the word level. In addition, meters can change within a sentence, such as from duple to triple meters. At the same time, rhythmic structures make reference to syntax and to semantic focus. Regarding melody, the naive viewpoint is that lexical tones establish musical scales in tone languages, such that tones correspond with fixed tone-levels. However, this is anything but the case, as tone languages clearly demonstrate the general phenomenon of declination, whereby the pitch level of the sentence declines from beginning to end. Therefore, even a Cantonese sentence comprised exclusively of high level-tones can show an intervallic declination of a fifth from the first syllable to the last. Rhythm and melody interact in such a way that when the syllables of words are compressed into duplets and triplets, each rhythmic group forms an arch-like micro-melody that interacts with the lexical tones of the word: while the end of the rhythmic group is marked with a "final drop", an upward trend is seen across the first half of the group before the drop begins.


37. When is speech musical? Why we need concepts of meter, melody, and motif to understand prosody in spoken language
Laura Dilley*
Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, MI, USA
* = Corresponding author, ldilley@msu.edu

Parallels between speech and music have intrigued scholars for centuries. However, linguistic theories of speech prosody have not always emphasized probable parallels between music and language domains, leading to a degree of disconnect between music and linguistics proper. This talk highlights research that illustrates and builds on parallels between speech and music. First, the Rhythm and Pitch (RaP) transcription system for speech prosody was recently developed to incorporate key theoretical insights of the well-known autosegmental-metrical theory of linguistic tone while granting primacy to many music-inspired concepts (e.g., melodic contour, rhythmic regularity, repetition of motifs). A description of the RaP system for speech annotation will be presented, including its uses for transcribing rhythmic prominences, phrasal boundaries, pitch accents, and perceptual isochrony (i.e., rhythmic regularity). Second, research will be highlighted that investigates the role in spoken language of pitch and timing repetition, a concept that is paramount to musical description. In particular, experimental findings will be presented that suggest that when pitch repetition and/or perceptual isochrony occur in speech, listeners generate rhythmic and grouping expectations that have significant effects on the processing of subsequent spoken words. These findings can be modeled in terms of (a) endogenous oscillators that entrain to auditory events in the environment, thereby affording predictions about the timing and metrical organization of subsequent events, as well as (b) a hierarchical metrical structure for speech rhythm that specifies the relative prominences of phonological units. Overall, this research illustrates the importance of musical concepts for investigating the processes involved in spoken language understanding, as well as the usefulness of music-inspired transcriptional approaches.

38. Rhythmic production of speech and other behaviors
Robert Port*
Department of Linguistics, Indiana University, Bloomington, IN, USA
* = Corresponding author, port@indiana.edu

Human speech is frequently produced rhythmically. This means that vowel onsets, and especially stressed vowel onsets, will occur at nearly periodic time intervals. Under many circumstances, speakers in most languages can automatically be encouraged to adopt rhythmical timing in speech production. Such periodic timing in speech is similar to periodic and metrical patterning in many other kinds of movements of the fingers, hands, arms, legs and trunk. Dynamical system models account for many aspects of these kinds of behavior. It seems likely that such coupling behaviors have played a role in cementing interpersonal bonds within human groups since before the historical development of spoken language.
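Both talks appeal to entrainment: oscillators that lock onto quasi-periodic events and predict upcoming ones. As a minimal sketch of that idea (a generic linear phase-correction rule with an assumed correction gain, not the specific dynamical-systems models the speakers use):

    def entrained_predictions(onsets, period, alpha=0.3):
        """Predicted beat times (s) from a minimal phase-correction model:
        each prediction error shifts the next prediction by a fraction
        alpha (alpha = 0 ignores the input; alpha = 1 resets to each onset)."""
        prediction = onsets[0] + period
        predictions = []
        for onset in onsets[1:]:
            predictions.append(prediction)
            prediction += period + alpha * (onset - prediction)
        return predictions

    # A slightly accelerating sequence: predictions lag, then catch up.
    print(entrained_predictions([0.0, 0.48, 0.94, 1.38], period=0.5))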


39. Validating Emotionally-Representative Musical Selections: Relationship Between Psychophysiological Response, Perceived and Felt Emotion
Laura A. Mitchell (1)*, Anna M. J. M. Paisley (2) and Daniel J. Levitin (3)
(1) Bishop's University, Sherbrooke, Canada (2) Glasgow Caledonian University, Glasgow, UK (3) McGill University, Montreal, Canada
* = Corresponding author, lauramitchellwork@gmail.com

Understanding the relationship between music and emotion remains a central issue in music psychology research. This includes investigating the recognition of musical emotion (e.g. Quintin et al., in press), the real-time continuous physiological reactions to music (e.g., Chapados & Levitin, 2008), as well as subjective/introspective reports of the emotions that music induces (e.g. Vines et al., in press). In a wide variety of research arenas, including research that is not addressed toward musical processing per se, music is frequently used as a means of emotional induction, making accessibility of pre-tested stimuli fundamental for consistency between studies. Vieillard et al. (2008) established a set of stimuli using specially-composed music conveying emotion through musical structure, finding that four specific emotions of happy, sad, peaceful and scary could be distinguished. To encourage ecological validity of stimuli, the current study provides self-report data of perceived musical emotion, felt emotion, liking and valence and psychophysiological measurement for 20 easily available pieces of music grouped by four emotional qualities. The two-minute instrumental pieces were selected through extensive piloting for being reliably identified with the emotions peaceful, scary, happy and sad. Thirty-six participants (20 females, mean age 29.3) listened individually on headphones to all 20 selections in one session, randomized within four emotional blocks counterbalanced in order. Heart rate, respiration rate and skin conductance were recorded using the Biopac MP35 system, and subjects used three minutes between tracks to rate familiarity, liking, pleasantness, perceived emotional expression and felt emotional response on a visual analogue scale. Musicianship and everyday listening habits were recorded by questionnaire. Results will be analyzed using linear mixed effects modelling to investigate effects of emotional condition, perceived and felt emotion and music perception variables on psychophysiological response, while taking into account order effects of listening.

40. Enjoying Sad Music: A Test of the Prolactin Hypothesis
Olivia Ladinig, David Huron*, and Charles Brooks
The Ohio State University, Columbus, Ohio, USA
* = Corresponding author, huron.1@osu.edu

This study tests the hypothesis that enjoyment of sad music is mediated by the hormone prolactin. Prolactin is released under conditions of stress (such as crying) and is known to have a comforting psychological effect (Brody & Kruger, 2006). Huron (2010) conjectured that prolactin might be released during music-induced sadness, and that when the sadness is "fictional" (as in artistic contexts), some listeners will benefit from prolactin's positive hedonic effects without the psychic pain associated with true grief or sadness. Specifically, this study tests the following hypothesis: for those listeners who report enjoying listening to sad music, sad music will cause an increase in serum prolactin concentrations, whereas those listeners who report not enjoying sad music will show little or no increase in prolactin when listening to sad music. Participants had been pre-selected and were drawn from two groups: listeners who professed to have a strong liking for sad music and listeners who professed a strong dislike for sad music. Participants listened to sad and happy music while prolactin concentrations were measured. To avoid listener-specific associations, sad and happy musical materials were pre-selected using independent listeners. Two controls were used. The first contrasted musical excerpts with silent periods. This allows a comparison between prolactin levels arising from musical exposure and baseline prolactin levels in the absence of music. The second control contrasted sad music with non-sad music. This tests whether prolactin concentrations are specifically related to sad music, and not merely a response to music in general. The research is pertinent to a problem posed by Aristotle regarding how people might enjoy artistic portrayals of negative emotions.


42. Piece vs Performance: Comparing Coordination of Audiences' Physiological Responses to Two Performances of Arcadelt's "Il bianco e dolce cigno"
Finn Upham (1)*, Stephen McAdams (1)
(1) Centre for Interdisciplinary Research in Music Media and Technology, McGill University, Montreal, Canada
* = Corresponding author, [email protected]

... emotional experience? While the notes are important, performance details may also have a strong effect. By looking at physiological responses to two different performances of the same work, we consider which shared responses might be due to the common musical structure and which might depend on the performers' interpretations of the work or other differentiating conditions. Two audiences' biosignals were continuously recorded. One audience of 63 participants listened to a studio recording of a professional male choir performing Jacques Arcadelt's Il bianco e dolce cigno played over large stereo speakers. The second audience of 40 participants attended a live performance of the same work by a semiprofessional mixed choir. The two audiences shared a group listening experience of the same musical work, but differed in participants, hall and performance. To quantify the audiences' response to the music, we examine the coordination of strong responses across each audience in features extracted from the four biosignals: skin conductance (SC), electromyography of the zygomaticus (EMGz), electromyography of the corrugator (EMGc) and blood volume pulse (BVP). In those moments when the audiences show similar degrees of coordinated physiological activity, this may be caused by the similarities between the two stimuli. Results show similar patterns of high and low levels of simultaneous activity in corrugator and zygomaticus EMG signals and skin conductance response in both audiences when responses are aligned to the score. Some differences in skin conductance response coordination align with differences in the two interpretations, particularly contrasting dynamics. Concurrent corrugator and zygomaticus activity challenges the opposite-valence interpretation of these signals. These results suggest that the corrugator, a muscle in the brow, may be an indication of tension rather than unpleasantness.


43. Ragas of Hindustani classical music and tempo on appraisal of happy and sad emotion: A developmental study
Shantala Hegde (1)*, Bhargavi Ramanujam (1), Arya Panikar (1)
(1) Cognitive Psychology & Neuroscience Laboratory, Cognitive Psychology Unit, Center for Cognition and Human Excellence, Department of Clinical Psychology, National Institute of Mental Health and Neurosciences, Bangalore, India
* = Corresponding author, [email protected]

Studies examining the emergence of sensitivity to components of music in the appraisal of emotion are very few. We examined the role of pitch distribution in ragas of Hindustani classical music (HCM) and the presence/absence of pulse in the appraisal of happy and sad emotion. The sample included musically untrained children aged 5-6 years (n = 30) and 10-11 years (n = 39). Data were compared with ratings by musically untrained adults (n = 30). Six ragas of HCM were used: three to evoke positive emotion (happy-ragas) and three to evoke negative emotion (sad-ragas). Two excerpts from each raga, one from the raga-elaboration phase (Alap) without pulse and the other (Jor-Jhala) with pulse (~64 pulses per minute), formed the stimulus. Timbre was kept constant. Participants rated the stimulus on a five-point Likert scale. Children aged 5-6 years could differentiate the happy-ragas from sad-ragas based on the degree of happiness (p < 0.01 level). Tempo did not influence their ratings. Children aged 10-11 years could distinguish the two sets of ragas above the level of chance (p < 0.01), similar to the adult group. Happy-raga excerpts with tempo were perceived as happier than excerpts without pulse, and sad-raga excerpts without pulse were perceived as sadder than excerpts with tempo (p < 0.01). This is the first study examining appraisal of musical emotion using ragas of HCM in musically untrained native Indian children and examining the influence of presence or absence of pulsated tempo on appraisal of musical emotion. Distribution of major (Shuddh-swaras) and minor (komal-swaras) pitches in a raga influenced appraisal of emotion. Presence of pulse seemed to influence the perception of emotion differently for the two emotions. Findings of this study will enable us to understand the development of abilities to appraise musical properties and the emotional content even without formal musical training. (Grant funding: Department of Science and Technology SR/FT/LS-058/2008, India.)

44. Literacy makes a difference: A cross-cultural study on the graphic representation of music by communities in the United Kingdom, Japan, and Papua New Guinea
George Athanasopoulos (1)*, Nikki Moran (2), Simon Frith (3)
(1) University of Edinburgh, Edinburgh, U.K., (2) University of Edinburgh, Edinburgh, U.K., (3) University of Edinburgh, Edinburgh, U.K.
* = Corresponding author, [email protected]

Cross-cultural research involving 102 performers from distinct cultural backgrounds observed how musicians engage with the textual representation of music, considering in particular the effect of literacy. The project took place at five fieldwork sites in three countries, involving classical musicians based in the UK; traditional Japanese musicians both familiar and unfamiliar with western standard notation; and members of the BenaBena tribe, a non-literate rural community in Papua New Guinea. Performer responses were examined in two studies which used original visual and auditory stimuli to explore distinctions between cultural and musical factors of the visual organization of musical sounds. 1. Participants heard up to 60 short stimuli that varied on three musical parameters (pitch, duration and attack rate). The instructions were simply to represent these visually so that if another community member saw the marks they should be able to connect them with the sounds. 2. A forced-choice design required that participants select the best shape to describe a sound (24 trials). Additionally, ethnographic interviews were carried out at fieldwork sites to provide richer, qualitative data regarding the participants' response to the research. Three styles of symbolic representation emerged: linear-notational (x-y axial representation, with time located on the x axis and the variable parameter on the y axis); linear-pictorial (axial time indication, variable parameter represented pictorially); and abstract-pictorial (no axial representation). Results for 1) showed that over 90% of literate participants used linear-notational representation. Of the non-literate participants, just 22.3% of responses were linear-notational, but 39.3% were linear-pictorial. The remainder (38.4%) consisted of abstract-pictorial responses. 2) Again, over 90% of the literate groups' responses showed a preference for linear-notational. No trends emerged in the non-literate group's responses. Results will be discussed with reference to qualitative data from the content analysis of the ethnographic material.


45. Cross-cultural differences in meter perception
Beste Kalender*, Sandra E. Trehub, & E. Glenn Schellenberg
University of Toronto, Ontario, Canada
* = Corresponding author, bestekalender@yahoo.com

The perception of metrical patterns is influenced by musical enculturation. For example, listeners tap more accurately to culturally familiar than to unfamiliar pop music that is similar in metrical structure. Musical enculturation can also interfere with the perception of novel metrical categories. We asked whether exposure to complex meters from one musical culture facilitates the perception of complex meters from a foreign musical culture. The participants (n = 57) were monolingual and bilingual adults with exclusive exposure to Western music (simple meters) or exposure to non-Western (simple and complex meters) and Western music. Adults were tested with MIDI instrumental versions of Turkish folk melodies, two in simple meter (2/4) and two in complex meter (5/8). After familiarization (2 min) with each melody, participants heard a meter-preserving alteration and a meter-violating alteration (random order) and rated the rhythmic similarity of each to the familiarization stimulus. Scores consisted of rating differences between meter-preserving and meter-violating alterations. Simple-meter scores exceeded chance levels for bimusical listeners, t(23) = 4.71, p < .001, and for monomusical listeners, t(32) = 5.60, p < .001, but complex-meter scores exceeded chance levels only for bimusical listeners, t(23) = 2.78, p < .05. In short, only listeners with exposure to complex meters detected changes in complex metrical patterns from an unfamiliar musical culture. Multiple regression analysis revealed a significant effect of music background (monomusical or bimusical), t(54) = 2.82, p < .01, but no effect of language background (monolingual or bilingual). Specifically, musical enculturation influenced performance with complex meters when language status (monolingual, bilingual) was held constant, but language status had a negligible effect on performance when musical background was held constant. Moreover, musical training did not account for the advantage of bimusical listeners. In short, the findings reveal transfer of training from a familiar to an unfamiliar musical culture.

46. Auditory structural parsing of Irish jigs: The role of listener experience
Christine Beckett*
Concordia University, Montreal, QC, Canada
* = Corresponding author, [email protected]

... (Beckett, 2009) showed that few perceptual studies have examined listeners' experiences of Irish music. In recent perception/production work, it was found that after 30 min of listening, listeners can create novel jigs but that non-musicians reproduce formal structure less accurately than musicians (Beckett, 2010). This raised the question of how participants perceived structure. Here, 48 participants in 4 groups (musicians/non-musicians, with low/high listening experience of Irish music) heard 7 unaccompanied Irish jigs twice each (4 traditional jigs on flute, 3 novel jigs sung). After the second hearing, participants used a computer tool to pinpoint divisions between what they heard as structural units of the melody. Data were recorded in ms of elapsed music. Timeline analysis showed that musicians and non-musicians with low exposure to Irish music mostly marked large regular units similar to typical Euro-western phrases (4 or 8 bars). Musicians and non-musicians with high exposure to Irish music tended to mark shorter, more numerous, and somewhat irregular units (2 bars, 1 bar, or less). Results were to some extent driven by discrepant terminology, especially amongst Irish performers. Replication with musicians and non-musicians in Ireland, all highly familiar with their own traditional music, is planned for March and April 2011. An emerging conclusion is that increasing familiarity with Irish traditional music leads to more fine-grained perception and judgments of structure. In contrast, increasing familiarity with Euro-western music arguably leads to perceiving ever-larger units of structure. An important implication is that when working with another musical tradition, researchers must not assume Western norms as valid for people experienced in that tradition.


47. How Does Culture Affect Perception of Rhythm?
Naresh N. Vempala (1)*, Frank A. Russo (1)
(1) SMART Lab, Department of Psychology, Ryerson University, Toronto, Canada
* = Corresponding author, [email protected]

Previous research suggests that listeners internalize the music of their culture to create a conceptual representation that reflects statistical regularities (Bharucha, 1987; Krumhansl, 1987). When listening to a piece of music, listeners use their existing mental template as shaped by their musical experience. Their enculturated representation of rhythm and meter interacts with available sensory information gathered from the stimulus to influence the perception of structure (Palmer & Krumhansl, 1990). Our goal for this study was to understand how musical enculturation could affect a listener's sensitivity to rhythmic structure conveyed by intensity accents. We examined goodness-of-fit judgments of probes in different rhythmic contexts across two different cultural groups, Canadians and Ecuadorians. We presented four different types of rhythmic stimuli comprising symmetric and asymmetric rhythmic groupings to both sets of participants. Our hypothesis was that culture would influence perception of rhythmic groupings, leading to an interaction of probe with culture for each type of rhythmic grouping. Because Canadians would have predominantly been enculturated to symmetric groupings, we predicted them to show sensitivity to the intensity accent structure present in symmetric rhythmic stimuli. In contrast, because of their enculturation to asymmetric rhythmic groupings, we predicted that Ecuadorians would show sensitivity to the intensity accent structure present in asymmetric rhythmic stimuli but not in symmetric stimuli. Our results showed that Canadian but not Ecuadorian participants were more sensitive to surface structure in symmetric rhythmic groupings than to asymmetric rhythmic groupings. Our results indicated the strong effect of internalized rhythmic schemas, as a product of enculturation, on perceiving rhythmic structure. Based on the results of this study, we propose an initial set of rules for developing a theoretical model of rhythmic perception that focuses on the influence of previous enculturation.

48. Individual Differences in Neuronal Correlates of Imagined and Perceived Tunes
Sibylle C. Herholz (1), Andrea R. Halpern (2)*, Robert J. Zatorre (1)
(1) Montreal Neurological Institute, McGill University; International Laboratory for Brain, Music and Sound Research (BRAMS); Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT); (2) Department of Psychology, Bucknell University
* = Corresponding author, ahalpern@bucknell.edu

We investigated auditory imagery of familiar melodies to determine the overlapping and distinct brain areas associated with auditory imagery and perception. We also studied individual differences in auditory imagery vividness. We scanned ten healthy right-handed people using sparse sampling fMRI. The Bucknell Auditory Imagery Scale (BAIS) assessed individual ability to evoke and manipulate auditory images. During encoding, participants listened to or imagined familiar melodies, cued by karaoke-like presentation of their lyrics. A silent visual condition served as baseline. During recognition, tune titles served as cues. Conjunction analysis for encoding showed overlap of activity during imagery and perception in anterior and posterior parts of secondary auditory cortices. The contrast of imagery vs. perception revealed an extensive cortical network including prefrontal cortex, supplementary motor area, intraparietal sulcus and cerebellum. BAIS scores correlated with activity in right anterior superior temporal gyrus, prefrontal cortex and anterior cingulate. Auditory cortices and SMA showed greater functional connectivity with frontal areas during imagery compared to perception. During recognition, inferior frontal cortex and anterior cingulate cortex, as well as left STS, were active compared to baseline. BAIS scores correlated with activity in left ACC, anterior middle temporal sulcus and prefrontal cortex. The overlap of activity in secondary auditory areas during musical imagery and perception indicates that similar neuronal networks are involved in both processes. In addition, during imagery, areas implicated in working memory, mental manipulation, and motor preparation are recruited. Reported vividness during encoding was directly related to activity in secondary auditory cortex, indicating that vividness of imagery is related to processing areas for the input modality. The correlation during retrieval suggests that more vivid imagers use that ability to facilitate memory retrieval. That the correlation was obtained using a questionnaire suggests that people's impression of their own abilities matches the neural resources they devote to these tasks.


49. Neural Specialization for Music
Samuel Norman-Haignere (1)*, Josh McDermott (2), Evelina Fedorenko (1), Nancy Kanwisher (1)
(1) McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, USA, (2) Center for Neural Science, New York University, USA
* = Corresponding author, svnh@mit.edu

Do specialized neural mechanisms exist in the human brain for processing music? Here we use fMRI to address this question by searching for regions in individual subjects with a strong response to music compared with speech stimuli, and then testing the responses of these regions to manipulations of musical and linguistic structure. Six subjects were scanned for 2 hours while listening to short, 9-second clips of music and speech. Music stimuli consisted of professionally recorded instrumental music that varied both in genre (classical vs. movie/tv scores) and familiarity (famous music vs. matched novel controls). Speech stimuli consisted of pairs of spoken sentences and nonword strings matched for prosody, thereby allowing us to also examine differences in higher-level linguistic structure. MIDI representations of music were used to examine responses to musical structure by scrambling the higher-order pitch and rhythmic organization in half of the clips while preserving lower-level acoustic properties. In all six subjects tested, we observed a region in the anterior part of the superior temporal gyrus with a strong, invariant response to all 4 types of music compared with speech. This region also showed a higher response to intact compared with scrambled MIDI music, but no difference between normal sentences and nonword strings. These results suggest that music perception may be in part subserved by specialized neural mechanisms that are sensitive to musical structure. Current work is exploring the selectivity of this region to other types of music and auditory stimuli as well as to more general auditory features such as pitch.

50. Music as an Aid to Learn New Verbal Information in Alzheimer's Disease
Moussard, A. (1,2)*, Bigand, E. (2), & Peretz, I. (1)
(1) Université de Montréal, Montréal, Canada, and BRAMS laboratory; (2) Université de Bourgogne, Dijon, France, and LEAD laboratory
* = Corresponding author, Aline.Moussard@u-bourgogne.fr

The goal of this study is to determine if music can improve verbal memory in Alzheimer's patients. There are multiple observations and anecdotes suggesting that music is a privileged material to support and stimulate memory in Alzheimer patients. There is as yet no experimental evidence, and hence no explanation of how and why music could be useful to individuals with dementia. In normal subjects, verbal information (i.e., words or texts) is not necessarily better learned when it is sung as compared to being spoken. The melodic component often represents an additional load for learning, creating a dual-task situation (Racette & Peretz, 2007). However, if the tune is, or becomes, familiar, the preexisting representation of the familiar melody can provide support for associating new verbal information (Purnell-Webb & Speelman, 2008). Our study compares, in a mild Alzheimer's disease patient, different learning conditions for an unknown text: spoken, and sung to an unfamiliar, mildly familiar, or highly familiar melody. Moreover, the spoken and unfamiliar sung conditions are relearned 4 more times (with a one-week interval between each), and once again after 4 weeks. The results show that the excerpts sung to a mildly or highly familiar melody are better learned than the unfamiliar one. Moreover, while the spoken condition seems to be better performed than the condition sung to the unfamiliar melody in the first learning session, this seems to be reversed over the relearning sessions, especially after the 4-week delay, where the sung condition becomes significantly better memorized. We discuss these results in terms of their therapeutic implications for clinical care of dementia.
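The MIDI scrambling in abstract 49 can be approximated in a few lines. The following is a guess at the spirit of such a manipulation rather than the authors' actual procedure: shuffling the order of pitches and of durations preserves a clip's overall pitch and duration content (lower-level properties) while destroying its higher-order melodic and rhythmic organization.

    import random

    # A note as (onset_in_beats, duration_in_beats, midi_pitch); monophonic toy case.
    def scramble(notes, seed=0):
        """Shuffle pitch order and duration order independently, then
        rebuild onsets so the scrambled clip keeps the same total length."""
        rng = random.Random(seed)
        pitches = [pitch for _, _, pitch in notes]
        durations = [dur for _, dur, _ in notes]
        rng.shuffle(pitches)
        rng.shuffle(durations)
        onset, scrambled = 0.0, []
        for dur, pitch in zip(durations, pitches):
            scrambled.append((onset, dur, pitch))
            onset += dur
        return scrambled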


51. Examining the Role of Training and Movement on Rhythm Perception in Disc Jockeys Using EEG and Behavioural Thresholds
Blake Butler (1)*, Laurel Trainor (1)
(1) McMaster University, Hamilton, Canada
* = Corresponding author, [email protected]

Disc jockeys (DJs) maintain a consistent beat by altering the tempo of one or more tracks. Studies have shown that non-musician listeners find it difficult to maintain a steady rhythm in the absence of an external acoustic stimulus, but that trained musicians are significantly better. The goals of this project were to determine whether DJs show an advantage in maintaining a steady imagined beat, and whether movement helps in this regard. Ten professional DJs, ten age-matched controls, and ten percussionists participated. In Part I, participants heard a sequence of alternating downbeats and upbeats, were asked to imagine the rhythm continuing for four additional downbeats, and had to determine if a target downbeat following the silence was on-time or early. In one condition subjects were allowed to move with the rhythm, while in another they remained still. The onset of the target varied from 0 to 62.5% early, and psychometric functions were created to determine detection thresholds. DJs showed greater accuracy in determining whether the target beat was early or on-time than control subjects (p = 0.039), but were not significantly different from the percussionist group (p > 0.05). Accuracy of each group was significantly impaired when asked to perform the task in the absence of movement (all p < 0.05). In Part II, event-related potentials (ERPs) were examined to real and imagined beats and to the occasional early start of a new beat sequence. Responses to early beats were compared to standard downbeats. Preliminary results indicate group differences, with larger ERP responses to imagined and early beats for DJs and percussionists compared to controls. These results suggest that rhythmic experience, whether as a DJ or percussionist, changes the brain. We are currently testing the effects of experience in a controlled way by studying DJ students before and after an intensive week of training.

52. Neural Dynamics of Beat Perception
John Iversen*, Aniruddh Patel
The Neurosciences Institute, San Diego, CA, USA
* = Corresponding author, iversen@nsi.edu

Our perceptions are jointly shaped by external stimuli and internal interpretation. The perceptual experience of a simple rhythm, for example, strongly depends upon its metrical interpretation (where one hears the beat). Such interpretation can be altered at will, providing a model of the voluntary cognitive organization of perception. Where in the brain do the bottom-up and top-down influences in rhythm perception converge? Is it purely auditory, or does it involve other systems? To understand the neural mechanisms responsible for beat perception and metrical interpretation, we measured brain responses as participants listened to a repeating rhythmic phrase, using magnetoencephalography. In separate trials, listeners (n = 11) were instructed to mentally impose different metrical organizations on the rhythm by hearing the downbeat at one of three different phases in the rhythm. The imagined beat could coincide with a note, or with a silent position (yielding a syncopated rhythm). Since the stimulus was unchanged, observed differences in brain activity between the conditions should relate to active rhythm interpretation. Two effects related to endogenous processes were observed. First, sound-evoked responses were increased when a note coincided with the imagined beat. This effect was observed in the beta range (20-30 Hz), consistent with earlier studies. Second, and in contrast, induced beta responses were decoupled from the stimulus and instead tracked the time of the imagined beat. The results demonstrate temporally precise rhythmic modulation of beta responses that reflect the active interpretation of a rhythm. Given the suggested roles of beta in motor processing and in long-range intracortical coordination, it is hypothesized that the motor system might drive these temporal effects, even in the absence of overt movement. Preliminary independent component localization analysis (n = 4) supports this view, consistently finding beat-related activity in motor areas. (Supported by Neurosciences Research Foundation.)
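The detection thresholds in abstract 51 (Part I) come from fitted psychometric functions. The sketch below shows the conventional recipe with made-up numbers (the data, function choice, and threshold criterion are illustrative assumptions, not the study's): fit a two-choice cumulative-Gaussian to proportion correct as a function of how early the target is, and read off the 75%-correct point.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Hypothetical proportions correct at each target-onset shift (% early).
    shifts = np.array([0.0, 12.5, 25.0, 37.5, 50.0, 62.5])
    p_correct = np.array([0.50, 0.55, 0.68, 0.84, 0.95, 0.99])

    def psychometric(x, mu, sigma):
        """Two-choice psychometric function: 50% guessing floor plus a
        cumulative-Gaussian rise toward 100% correct."""
        return 0.5 + 0.5 * norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(psychometric, shifts, p_correct, p0=[30.0, 10.0])
    # At x = mu the function equals 0.75, so mu is the 75%-correct threshold.
    print(f"detection threshold = {mu:.1f}% early")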


53. Tapping to Hear: "Moving to the Beat" Improves Rhythmic Acuity
Fiona Manning*, Michael Schutz
McMaster Institute for Music and the Mind, McMaster University, Hamilton, Canada
* = Corresponding author, manninfc@mcmaster.ca

There has been increasing interest in recent years in investigating links between perception and action (Hommel, Müsseler, Aschersleben, & Prinz, 2001). Here, we explore this issue through a novel paradigm designed to measure the effect of "moving to the beat" when listening to rhythmic music. Specifically, we examined the effect of tapping on participants' sensitivity to changes in the temporal location of a probe tone. In doing so, we have documented a novel instance of a perception-action link in an objective task, using participants selected without regard for musical training. In this experiment, participants heard a series of isochronous beats, and were asked to identify whether the final tone after a short silence was consistent with the timing of the preceding rhythm. On half the trials, participants tapped along with the pulse on an electronic drum pad, and on half the trials they were asked to listen without tapping. When the probe tone was late (i.e., after the expected beat), performance in the tap condition was significantly better than performance in the no-tap condition (an 87% improvement). However, there was no effect of tapping on performance when the probe tone was early or on time. This asymmetric effect of tapping on performance is consistent with previous work documenting a tendency for taps to "anticipate" the target beat (reviewed in Aschersleben, 2002). The effect of movement on the perception of rhythmic information is consistent with previous work showing that movement can affect the perception of metrically ambiguous stimuli (Phillips-Silver & Trainor, 2005; 2007). Therefore these data extend previous research by demonstrating that movement can objectively improve the perception of rhythmic information, which suggests that part of our propensity to move while listening to music may stem from this movement's ability to improve our understanding of its rhythmic structure.

54. Effect of movement on the metrical interpretation of ambiguous rhythms: Phillips-Silver and Trainor (2007) revisited
J. Devin McAuley (1)*, Molly J. Henry (2), Prashanth Rajarajan (3), Karli Nave (4)
(1) Department of Psychology, Michigan State University, East Lansing, USA, (2) Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany, (3) Department of Psychology, Michigan State University, East Lansing, USA, (4) Department of Psychology, Michigan State University, East Lansing, USA
* = Corresponding author, dmcauley@msu.edu

Phillips-Silver and Trainor (2005, 2007, 2008) reported that moving the body (or having one's body moved) in synchrony with a metrically ambiguous rhythm biases the encoding of the rhythm in a manner consistent with the movement in both infants and adults. The current study reports the results of a series of experiments where we failed to replicate the reported effect of metrical movement on auditory rhythm processing. All experiments used the same rhythms and general procedure described in Phillips-Silver and Trainor (2007, Experiment 1). In an initial encoding phase, participants were 'bounced' to an ambiguous rhythm in time with beats that corresponded to either a duple or triple metrical interpretation. In a subsequent test phase, participants listened to pairs of rhythms – one accented in a duple fashion and the other accented in a triple fashion – and judged which of the two rhythms best matched what they heard during the encoding phase. Inconsistent with Phillips-Silver and Trainor (2007), no reliable effect of bouncing meter was observed on proportions of 'duple' responses; i.e., duple and triple bouncing in the encoding phase did not lead to duple and triple preferences in the test phase. The same null effect of bouncing meter was observed for samples of undergraduate students, ballroom dancers, and members of the marching band. Changing the task so that participants judged their movement in the encoding phase, rather than what they heard, revealed that participants were able to accurately recall the way they were bounced. Possible reasons for the failure to replicate Phillips-Silver and Trainor (2007) will be discussed.
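The trial structure of the tapping paradigm in abstract 53 can be parameterized compactly. The sketch below is purely illustrative; the IOI, sequence length, and shift values are assumptions rather than the study's parameters:

    import numpy as np

    def probe_trial(ioi=0.5, n_beats=8, silent_beats=2, shift=0.0):
        """Onset times (s) for an isochronous sequence plus a probe tone.
        shift < 0: probe early; shift = 0: on time; shift > 0: probe late."""
        beat_onsets = np.arange(n_beats) * ioi
        probe_onset = (n_beats - 1 + silent_beats) * ioi + shift
        return beat_onsets, probe_onset

    beats, probe = probe_trial(shift=0.05)   # a "late" probe trial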


55 The Effect of Melodic Structure and Event Density on Perceived Tempo
Emily Cogsdill and Justin London*
Carleton College, Northfield, MN, USA
Corresponding author: jlondon@carleton.edu

Multiple cues contribute to our sense of musical tempo, including melodic interval, durational patterning (IOI), sequence complexity, event density, and perceived beat rate. We examine the interaction between event density and melodic motion in the context of a tempo judgment task. Stimuli consisted of melodic vs. percussive versions, the latter created by replacing the pitch information with a repeated wood-block sound. Stimuli also had two levels of surface activity (dense vs. sparse) and were presented at six tempos (72, 80, 89, 100, 120, and 133 bpm). Twenty-four participants tapped along at two controlled rates and made tempo ratings using a 9-point scale. A main effect was observed for density, t(46)=2.40, p<.05. No main effect was found for melodic structure. In the fast tap condition, percussive stimuli elicited faster tempo judgments at 100 and 120 bpm (t(190)=1.00, p<.001; t(190)=0.99, p<.05). In the slow tap condition a similar pattern emerged at 72 and 80 bpm (t(190)=0.99, p<.05; t(190)=0.97, p<.05). No other interaction effects were observed. Greater event density consistently led to faster tempo judgments; greater melodic movement did not. Indeed, melodic motion led to slower tempo judgments in some contexts. The effect of melodic structure on perceived tempo is influenced by listeners' self-motion, suggesting that the interaction between melodic motion and other parameters merits further study.

56 Can Musicians Track Two Different Beats Simultaneously?
Ève Poudrier (1)* & Bruno H. Repp (2)
(1) Yale University, New Haven, Connecticut, USA, (2) Haskins Laboratories, New Haven, Connecticut, USA
* = Corresponding author, eve.poudrier@yale.edu

The simultaneous presence of different metrical structures is not uncommon in Western art music and the music of various non-Western cultures. However, it is unclear whether it is possible for listeners and performers to cognitively establish and maintain different beats simultaneously without integrating them into a single metric framework. The present study attempted to address this issue empirically. Two simple but non-isochronous rhythms, distinguished by widely separated pitch registers and representing different meters (2/4 and 6/8), were presented simultaneously in various phase relationships, and participants (classically trained musicians) had to judge whether a probe tone fell on the beat in one or both rhythms. In a selective attention condition, they had to attend to one rhythm and ignore the other, whereas in a divided attention condition, they had to attend to both. We conducted three experiments of increasing complexity. In Experiment 1, the rhythms shared the lowest articulated metrical level even when they were out of phase. Participants performed significantly better in the divided attention condition than predicted by the null hypothesis that they would be able to attend to only one rhythm at a time (taking possible guessing into account). In Experiment 2, the rhythms were interleaved, and here the null hypothesis could not be confidently rejected, although some participants did perform better than predicted. In Experiment 3, the rhythms were interleaved and changed from trial to trial. Here, performance in the divided attention condition was clearly at chance level. In Experiment 1, participants may have relied on the composite beat pattern (the non-isochronous combined sequence of the two underlying beats), rather than tracking the two beats independently, a strategy that was difficult to apply in Experiments 2 and 3. Thus, we do not so far have clear evidence that two different beats can be tracked independently.


57 Spontaneous goal inference without concrete external goals: Implications for the concept of dance
Adena Schachner (1)*, Susan Carey (1)
(1) Department of Psychology, Harvard University, Cambridge, MA, USA
* = Corresponding author, [email protected]

Observers spontaneously interpret others' actions in terms of their goal, at least for actions with concrete external goals (e.g. reaching for an object; Woodward, 1998). Is this true for dance actions? We hypothesized that our mental concept of dance is not simply a list or collection of movement features, but is primarily based on the actors' goal. We also predicted that actions with no concrete external goal (like dance) would lead observers to infer that the goal is to perform the movements themselves. In three between-subjects experiments, participants saw either (a) movements with no external goal (an animated character moving in an empty room); or (b) the exact same movements, with an external goal (manipulating objects). In Experiment 1, we asked participants whether the actions they saw were dance. Those participants who saw the actions performed without objects present were significantly more likely to say that the actions were dance. Since the movements themselves were identical across conditions, this suggests that the movement features were not driving this categorization. Experiments 2 and 3 directly tested the hypothesis that goal inferences drive categorization of actions as dance. We first determined what goal participants inferred, if any. Then, we asked participants how likely it was that the actions were actually dance. We found that nearly half of participants in the no-external-goal condition inferred that the movements themselves were the goal. In addition, participants who inferred movement-as-goal later rated the actions as more likely to be dance, exercise, or ritual, even compared to other participants who saw the exact same stimuli. Thus, our concept of dance rests primarily on the inferred goal of the actor, not solely on features of the movement. These data also suggest that the idea of movement as the goal forms an important part of our concept of dance.

58 Linking perception to cognition in the statistical learning of tonal hierarchies
Dominique Vuvan (1)*, Mark Schmuckler (1)
(1) University of Toronto Scarborough, Toronto, Canada
* = Corresponding author, dominique.vuvan@utoronto.ca

By what mechanisms do listeners learn the rules that govern their musical system? One potential mechanism is statistical learning, referring to the human perceptual system's exquisite sensitivity to the statistical properties of the environment. Tonality describes the rules that hierarchically organize musical pitch. A handful of studies have demonstrated listeners' responsiveness to frequency properties of novel tone organizations. However, no research has yet linked listeners' perception of these properties to their cognitive processing of those tone structures in a manner consistent with the cognitive processing of musical tonality. Three experiments sought to establish this link. In Experiment 1, listeners were exposed to a continuous stream constructed of three-tone diatonic "tone-words" (Saffran et al., 1999). After exposure, listeners parsed this stream into its constituent tone-words, but their melody memory performance did not differ between melodies composed of the tone-words and melodies composed randomly. In Experiment 2, listeners were exposed to diatonic chord sequences composed using a novel harmonic grammar (Jonaitis & Saffran, 2009). After exposure, listeners discriminated chord sequences that obeyed the grammar from those that did not, but did not display tonal priming effects for grammar-obeying sequences. In Experiment 3, 20 undergraduate listeners were exposed to a series of melodies that followed a harmonic grammar constructed from the Bohlen-Pierce (non-diatonic) system (Loui & Wessel, 2008). After exposure, listeners' melody memory performance was affected by whether the melody followed the harmonic grammar of the melodies heard during the exposure phase. Therefore, tonal structures acquired through statistical learning can have cognitive consequences, as long as their constituent units are not already implicated in other highly-learned tonal systems (i.e., Western diatonic tonal hierarchies).


59 Inattentional Deafness in Music: The Role of Expertise and Familiarity
Sabrina Koreimann (1)*, Oliver Vitouch (2)
(1) Klagenfurt, Austria, (2) Klagenfurt, Austria
* = Corresponding author, sabrina.koreimann@uni-klu.ac.at

Contrary to inattentional blindness effects, phenomena of inattentional deafness are less well known. We here present, to the best of our knowledge, the first tests of inattentional deafness in music under controlled experimental conditions. Participants listened to a modification of the first 1'50" of Richard Strauss' Thus Spake Zarathustra, with the experimental group having the task of counting the number of tympani beats in the piece and the control group just listening. An electric guitar solo served as the unexpected event. In our initial study, experimental data of N=115 subjects (18-63 years, M=26 years, 64% female) were analyzed. To investigate the impact of expertise, musicians (n=57) were compared with non-musicians (n=58). To test familiarity effects, two further experimental groups were investigated which were previously familiarized with the original piece of music and the primary task (counting task). Results of our initial study demonstrate an inattentional deafness effect under dynamic musical conditions (χ2[total]=18.8, p<.001, N=115). Unexpectedly, our examinations show structurally equivalent, although less extreme, results even in the musicians group. The experimental variation of familiarity with the piece and the primary task had moderate effects, but did not make inattentional deafness effects disappear. Based on these findings, subsequent experiments will aim at elucidating open questions such as the levels of perceptual processing involved, and individual and situational performance predictors.

60 Identification of familiar melodies from rhythm or pitch alone
Kathleen D. Houlihan (1)*, Daniel J. Levitin (1)
(1) McGill University, Montréal, Canada
* = Corresponding author, Kathleen.houlihan@mail.mcgill.ca

To what extent are pitch and rhythm by themselves sufficient retrieval cues for identifying well-known songs? That is, in a Roschian context, how might we evaluate their cue validity for accessing specific examples of songs? We report new data from an experiment based on the seminal work of White (1960), which examined identification of transformed musical information, and the consequent research on identification of songs based on rhythm or pitch information alone by Hébert & Peretz (1997). Our aim was to discover the extent to which rhythm alone or pitch alone can be used for identifying songs, and to establish a set of validated materials for future researchers to use. To accomplish this, we examined the efficacy of song identification based on impoverished musical information. Specifically, we examined rhythm information and pitch information alone, crossed with the sub-variables of quantization and dynamics. Participants (n=534) in a between-subjects design were presented with excerpts of well-known songs with solely the rhythmic information or pitch information intact. The results revealed a low percentage of correct identification overall, and greater identification in the pitch conditions; however, we observed a wide range of accuracy in identification across songs. This indicates a previously unreported phenomenon that was masked in earlier reports that used averaging techniques: certain songs (presumably as a function of their underlying compositional features) are more readily identified by rhythm alone or pitch alone than others. Additionally, the present study provides the research community with a useful tool: a corpus of well-known songs, a subset of which is reliably identifiable by rhythm or pitch alone.


61 Our Varying Histories and Future Potential: Models and Maps in Science, the Humanities, and in Music Theory
Eugene Narmour*
University of Pennsylvania, Philadelphia, USA
* = Corresponding author, enarmour@sas.upenn.edu

Part 1 briefly recounts the influence of social unrest and the explosion of knowledge in both psychology and the humanities from 1970-1990. As the sciences rely on explicit top-down theories connected to bottom-up maps and models, whereas the humanities build on bottom-up mappings within malleable top-down "theories" (approaches, themes, theses, programs, methods, etc.), the changes in the sciences during this period contrasted sharply with those in the humanities. Both domains witnessed a surge of new journals and, in the case of music cognition, the establishment of a new sub-field. In this regard, Music Perception, as originally envisioned by Diana Deutsch, was crucial. Part 2 discusses in detail how these two social transformations affected the histories of music theory and cognitive music theory. The former fractiously withdrew from its parent organization (AMS), whereas the latter was welcomed into SMPC. Inasmuch as both music theory and cognitive music theory also construct maps and models, Part 3, the heart of the study, examines the metatheoretical importance of these terms for music cognition, music theory, and cognitive music theory. Parts 1-3 deal with the recent past and make little attempt to cover the present state of research from 1990-2010. Part 4, however, speculates about the future: how music cognition, cognitive music theory, and music theory contribute to the structure of musical knowledge. The intellectual potential of this unique triadic collaboration is discussed: psychology provides a commanding theoretical framework of the human mind, while music theory and cognitive music theory model the moment-to-moment temporal emotions and auditory intellections at the core of the musical art.

62 Twenty-Six Years of Music Perception: Trends in the Field
Anna K. Tirovolas & Daniel J. Levitin
McGill University, Montreal, Quebec, Canada
* = Corresponding author, [email protected]

In this study, we sought to document the longitudinal course of empirical studies (a meta-trend analysis) published in the journal Music Perception, dating from the journal's first issue published in 1983 to the present. The aim of this project was to systematically characterize the nature of empirical research published in one of the principal peer-reviewed outlets for work in our field, and to consider these data as a sample representing the overall course of music perception research across the last three decades. Specific areas examined within each paper were: research topic, the types of participants used (including levels of musical training), the nature of the stimuli presented, materials used to present stimuli, types of outcome measures and measurement approaches, as well as geographic and disciplinary (departmental) distribution of the authors. In total, 384 empirical papers in the journal were examined, as well as a full set of 574 articles, and relevant details extracted.


63 Past and present conceptions of music in the mind: An introduction to Ernst Kurth's Musikpsychologie (1931)
Daphne Tan*
Eastman School of Music, Rochester, NY, USA
* = Corresponding author, [email protected]

Ernst Kurth (1886–1946) is recognized as an influential figure in the history of music theory: his analytical monographs—which describe the music of Bach, Wagner, and Bruckner in terms of kinetic and potential energy and dynamic waves of motion—continue to resonate with musicians. Though some recent studies acknowledge Kurth's music-cognitive contributions, his ideas remain relatively unknown to North American scholars engaged in empirical research. Indeed, Kurth's views on music in the mind are rarely discussed. The present talk provides an introduction to Kurth's psychological outlook as expressed in his last monograph, Musikpsychologie (1931). In particular, I consider recent claims that Kurth's approach to music anticipates the work of modern cognitive research, and moreover that Kurth "founded the field of music psychology" (Rothfarb 1988). This discussion highlights ideas presented in Musikpsychologie that have received some attention in modern empirical literature: musical tension, implicit learning, and categorical perception. It also suggests how Kurth's ideas may encourage new experimental research, particularly pertaining to the elasticity of perceptual categories, contextual effects on pitch perception, and musical metaphors.

64 Toward a Unified Theory of Music Cognition
Eugene Narmour*
University of Pennsylvania, Philadelphia, USA
* = Corresponding author, [email protected]

We experience music as a unity, where psychophysical, perceptual, cognitive, and emotional processing all come together. Yet our manifold musical analyses and heterogeneous collections of empirical data belie this, where music theorists juggle many different and incommensurate kinds of stylistic theories to analyze music, and where psychologists invoke many different kinds of disconnected experiments to explain how listeners cognize music. To address this surfeit of theories and data, I identify some twenty-five musical parameters and show that they are all scalable, whether we order their elements ordinally or by rank, categorically or nominally, by ratios, or intervallically. Such parametric scaling suggests a possible path toward theoretical unity. I hypothesize that all these orderings entail degrees of similarity and difference along with degrees of closure (weak implication) and nonclosure (strong implication). Motions on all these scales in every parameter create structures of process and reversal. Through such scaled movement, we can identify partially isomorphic, analogous structuring between parameters. We can also measure the multiple and frequent noncongruences that are responsible for producing strong musical affects. Since both structure and affect contribute to memory, parametric interactions thus necessitate analyzing two simultaneous analytical tracks, one for congruent syntactic structuring and one for noncongruent affective structuring. I construct an analytical symbology for tracking the interaction between multi-parametric congruence and noncongruence in order to explain more precisely the aesthetic, temporal relationship between structure and affect. The generated analyses are minimally reductive with respect to the score (already a reduction). And they are produced under one unified theory, the IR model.


65 Visual Anticipation Aids in Synchronization Tasks
Aaron Albin (1)*, Sang Won Lee (2), Parag Chordia (3)
(1) Georgia Tech Center for Music Technology, Atlanta, USA, (2) Georgia Tech Center for Music Technology, Atlanta, USA, (3) Georgia Tech Center for Music Technology, Atlanta, USA
* = Corresponding author, [email protected]

Sensorimotor synchronization (SMS) to visual cues, typically flashing lights, has been shown to have higher mean asynchrony and a lower upper rate limit (i.e., larger IOI period) compared with auditory stimuli. However, flashing lights may be difficult for SMS, and "spatial displacement" or biological motion might facilitate SMS (Repp, 2006). Here we introduce such dynamic visual cues and compare SMS in a variety of auditory, visual, and combined conditions. We conducted a finger-tapping synchronization task to isochronous patterns (Radil et al., 1990; Patel et al., 2005; Repp & Penel, 2002; Repp & Penel, 2004). The IOI of the pattern was 0.5 sec, which lies within the rate limits of both visual and auditory stimuli (Repp, 2007). Fourteen subjects, all of whom had musical experience, participated. In addition to a flashing light, we looked at two anticipatory gestures, one with a constant velocity and another with an acceleration component, in which a square falls down onto the middle of the participant's screen; these were conducted with and without audio. Stimuli with constant-velocity and acceleration anticipation outperformed audio alone or flash in terms of mean asynchrony. The acceleration component without audio had the least standard deviation, and when combined with audio had the least absolute mean asynchrony. The differences in mean asynchrony are statistically significant (p<.0001) using a multiple comparison test with the Tukey-Kramer statistic, with the exception of the Flash and Flash+Audio conditions. The current work suggests that the disparity in SMS performance in visual vs. auditory modalities may not be due to the modality itself but rather to the nature of the visual stimuli. For anticipatory visual conditions, the consistency of taps as measured by the standard deviation of tap times is significantly lower than for both audio alone and flashing visual cues. The implications for internal timing mechanisms reducing asynchrony in the dynamic visual condition are intriguing.

66 Explaining Snowball's Dancing: A Simpler Alternative to the Vocal-Learning Hypothesis
Mark S. Riggle*
Causal Aspects, LLC, Charlottesville VA, USA
* = Corresponding author, markriggle@alumni.rice.edu

Some parrots dance to music, thereby showing entrainment is not unique to humans. This observation led to the 'vocal-learning hypothesis' (VLH) for entrainment capacity. For birds, since entrainment is not expressed in the wild (meaning entrainment has no fitness value), entrainment could not have been directly selected by evolution. For the origin of entrainment capacity, the VLH states that selection for vocal learning would develop strong auditory-motor coupling, and that this coupling supplies the capacity for entrainment. That is, entrainment is a direct side effect of selection for vocal learning. This same line of reasoning applied to humans tightly constrains the evolutionary relationship of music and language. We show, with supporting evidence, a viable alternative hypothesis for entrainment capacity, and furthermore explain why only parrot species (and humans) naturally entrain. This hypothesis entails two conditions: 1) if entrainment creates an internal pleasure, and 2) if the brain's neural plasticity can adapt to the task, then entrainment can be a learned skill. Specifically, we postulate that a neural circuit exists that produces an internal reward, such as dopamine, when an auditory beat occurs synchronously with a periodic vestibular jolt. We show evidence from human behaviors consistent with both the circuit's existence and with entrainment as a learned skill. Assuming a similar, but not homologous, circuit in birds, we can show why parrots specifically may also learn to entrain. Sufficient neural plasticity to adapt to the entrainment task may be indicated by the vocal-learning ability; additionally, during music, the bird's social mimicking of a human partner's movement may expose the bird to the entrainment pleasure. The bird will then learn to entrain to music. We examine why only birds and humans should possess the neural circuit for entrainment pleasure. This alternative hypothesis implies a vastly different relationship of music and language.


67 Movement Rate Affects Tempo Judgments for Some Listeners
Justin London* and Emily Cogsdill
Carleton College, Northfield MN, USA
* = Corresponding author: jlondon@carleton.edu

Converging evidence from neuroscience and behavioral studies points to an intimate link between rhythm perception and production. The current study examines the effect of motor behavior on the perceived speed of an auditory rhythm. We posit a positive correlation between movement rate and perceived tempo. Participants tapped at two controlled rates in two blocks of trials, either every beat (relatively fast) or every other beat (relatively slow). Stimuli consisted of (a) artificial melodies in 2/4 meter, 8 bars long, and (b) percussive stimuli, created by replacing the pitch information with a repeated wood-block sound. Randomly ordered stimuli were presented at six tempos (72, 80, 89, 100, 120, and 133 bpm), four percussive and four melodic stimuli for each tempo. Participants made tempo ratings using a 9-point scale. For all participants, a main effect of tapping mode was found only at the 89 bpm tempo, t(23)=-2.289; p=.016 (1-tailed). However, nine participants showed significant results at all tempo conditions (average p=.004), four had near-significant results (average p=.058), while eleven were non-significant (average p=.314). Nonmusician participants gave significantly slower tempo ratings (p=.034) when tapping every other beat, but only at the 89 bpm tempo. Thus movement rate affects tempo judgment, but only for some listeners; there is also an interaction between musical training, movement rate, and perceived tempo. While more studies are needed to determine the differences between movement-sensitive versus insensitive listeners, we tentatively propose that the perceived speed of an auditory rhythm in part depends on how fast you have to move to keep up with it.

68 Melodic Motion as Simulated Action: Continuation Tapping With Triggered Tones
Paolo Ammirante (1)*, William F. Thompson (1)
(1) Macquarie University, Sydney, Australia
* = Corresponding author, [email protected]

It is well known that melodies are perceived to have motional qualities. According to Common Coding theory, these perceptual qualities should be reflected in actions. In a series of continuation tapping experiments, non-musicians used their index finger to tap a steady beat on a single key. Each tap triggered a sounded tone, and successive triggered tones were varied in pitch to form musical melodies. Although instructed to ignore the tones, participants produced systematic deviations in timing and movement velocity that mirrored the implied velocity of melodic motion. Where unidirectional pitch contour and large pitch distances between successive tones implied faster melodic motion, the inter-tap interval (ITI) initiated by the just-triggered tone was shorter and the velocity of the tap that followed (TV) was faster; where changes in pitch direction and smaller pitch distances implied slower melodic motion, longer ITI and slower TV followed. These findings suggest that, due to overlap between perceptual and motor representations, participants failed to disambiguate the velocity of melodic motion from finger movement velocity. Implications of these findings are discussed with respect to two topics: melodic accent and performance expression.


69 The Perception of Cadential Closure in Mozart's Keyboard Sonatas
David Sears (1)*, William E. Caplin (1), Stephen McAdams (1)
(1) McGill University, Montreal, Canada
* = Corresponding author, [email protected]

Although studies in music perception and cognition provide ample evidence for the importance of cadential closure in the experience of tonal music for both trained and untrained listeners, there remains a glaring lack of research on how listeners differentiate amongst the cadential categories proposed by music theorists, as well as on how various musical parameters contribute to the perception of cadential strength. This study explores the underlying mechanisms responsible for the perception of both genuine cadences (perfect authentic, imperfect authentic, half) and failed cadences (deceptive, evaded) in Mozart's keyboard sonatas. Twenty musicians and twenty non-musicians heard fifty short excerpts (10 s) that contained an equal number of perfect authentic (PAC), imperfect authentic (IAC), half (HC), deceptive (DC), and evaded cadences (EV). Each cadential category was also further subdivided according to issues of formal location (PAC, HC), melodic dissonance at the cadential arrival (IAC), and harmony at the cadential arrival (EV). For all excerpts, performance variables were neutralized so as to consider only compositional parameters of closure. Thus for these stimuli, cadential arrival provided the crucial independent variable distinguishing genuine cadences from non-cadences. After listening to each excerpt, participants rated the strength of completion of each excerpt on a 7-point analogical-categorical scale. Results indicated that musicians and non-musicians did not differ in their ratings for genuine cadences but differed significantly in their ratings for failed cadences. Formal location (PAC), melodic dissonance at cadential arrival (IAC), and harmony at cadential arrival (EV) also significantly affected participant ratings in both groups. Finally, a regression analysis indicated the musical parameters that significantly contributed to participant ratings of completion for each cadential category.

70 The Psychological Representation of Musical Intervals in a Twelve-Tone Context
Jenine Brown (1)*
(1) Eastman School of Music, Rochester, NY
* = Corresponding author, jenine.lawson@rochester.edu

This study investigates whether listeners implicitly attune to the repetitive adjacent interval patterns found in twelve-tone rows. Listeners (n=10) were freshman and sophomore music majors at the Eastman School of Music. The familiarization phase consisted of the 48 versions of a twelve-tone row. In this row, intervals 1 and 3 occurred most often, followed by intervals 2 and 5. Interval 8, for example, never occurred within the row. For each randomly ordered trial, listeners heard a probe melodic interval. On a 1-7 scale, listeners rated how idiomatic the melodic interval was in comparison to what they heard during the familiarization phase. Listeners rated intervals from +/-1 to 12 semitones. Listeners rated within-row intervals significantly higher than not-in-row intervals. Moreover, listeners rated common within-row intervals significantly higher than less common within-row intervals. Listener responses correlate with the "Interval Distribution," which illustrates the occurrences of surface intervals within the familiarization phase. Results demonstrate that listeners implicitly attune to repetitive intervals in a twelve-tone musical language. Moreover, they suggest that listeners create a hierarchy of more common and less common intervals, an initial step to hearing structure in twelve-tone music. The experiment was duplicated with a new set of participants (n=10), where listeners heard a row in the familiarization phase with a different intervallic structure. In this row, intervals 2 and 5 occurred most often. Listener responses did not correlate with the Interval Distribution, and within-row intervals were not rated higher than not-in-row intervals. These results suggest that implicit learning of intervals in a twelve-tone context only occurs when within-row intervals are perceptually salient.
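The "Interval Distribution" referred to above is straightforward to compute. The sketch below (Python) tallies adjacent intervals across the 48 classical forms of a row; the row shown is an invented illustration rather than the row used in the study, and the reduction to interval classes 1-6 is only one of several possible counting conventions.

```python
from collections import Counter

def row_forms(row):
    """Yield all 48 classical forms of a twelve-tone row:
    12 transpositions each of prime, inversion, retrograde,
    and retrograde inversion."""
    prime = list(row)
    inv = [(2 * row[0] - p) % 12 for p in row]  # mirror about the first pc
    for form in (prime, inv, prime[::-1], inv[::-1]):
        for t in range(12):
            yield [(p + t) % 12 for p in form]

def interval_distribution(row):
    """Tally adjacent intervals over all 48 forms, reduced to
    interval classes 1-6."""
    counts = Counter()
    for form in row_forms(row):
        for a, b in zip(form, form[1:]):
            d = (b - a) % 12
            counts[min(d, 12 - d)] += 1
    return counts

# Hypothetical row in which interval classes 1 and 3 dominate,
# mimicking the kind of skew the study exploits.
row = [0, 1, 4, 5, 8, 9, 11, 2, 3, 6, 7, 10]
print(interval_distribution(row))
```

Correlating listeners' ratings against counts of this kind is what the abstract describes; realizing the rows in register would allow counting actual semitone distances (1-12) rather than interval classes.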


71 Pitch Salience in Tonal Contexts and Asymmetry of Perceived Key Movement
Richard Parncutt (1)*, Craig Sapp (2)
(1) Centre for Systematic Musicology, University of Graz, Austria, (2) CCARH, Department of Music, Stanford University
* = Corresponding author, parncutt@uni-graz.at

Thompson and Cuddy (1989) found that perceived key distance is greater for modulations to flat-side keys (in chord progressions but not individual voices). Cuddy and Thompson (1992) explained the asymmetry with probe-tone profiles. Flats relative to a key signature may be more salient simply because they lie at perfect-fifth and/or major-third intervals below scale steps (Terhardt). That could explain why, relative to key signatures, sharps are more common than flats. In 200 songs with piano accompaniment (Deutscher Liederschatz, 1859-1872, Vol. 1, Ludwig Erk): in the 196 songs in major keys, 1016 notes are sharpened and 459 flatted relative to the starting key; in the 4 minor songs, 115 notes are sharpened and none are flatted. In 370 Bach four-part chorales, 185 are major with 1534 sharps and 465 flats; 139 are minor with 2628 sharps and 208 flats; 37 are Dorian (classified by Burns, 1995) with 656 sharps and 608 flats; and 9 are Mixolydian with 110 sharps and 18 flats. To test directly whether flats are more perceptually salient than sharps, we presented diatonic progressions of five chords to musicians and non-musicians. All chords were major or minor triads of octave-complex tones. The first was the tonic; the others were ii, IV, V and vi in major keys and ii, IV, v and VI in minor. The last four chords were presented in all 24 different orders. In half of all trials, the penultimate chord was changed from major to minor or vice versa. All listeners heard all trials in a unique random order and rated each progression's unusualness. Musicians were separately asked whether the last chord contained an accidental. We predict that a chord with a flat will sound more unusual and that accidentals will be identified more often if they are flats.

72 Ancient music and modern ears: The perception and discrimination of Nicola Vicentino's 31-tone tuning system
Mikaela Miller (1)*, Jonathan Wild (1), Stephen McAdams (1)
(1) CIRMMT, Schulich School of Music, McGill University, Montreal, Canada
* = Corresponding author, mikaela.miller@mail.mcgill.ca

Nicola Vicentino is recognized as a sixteenth-century musical revolutionary; a wealth of scholarly work has been devoted to his 31-tone tuning system and his listener-oriented approach to the theory and practice of music. Attempts to analyze Vicentino's compositions are limited in number, however, and empirical studies of the perception of his music are non-existent. The current paper tests the hypothesis that trained musicians can hear the microtonal nuances of Vicentino's music (as Vicentino claims), and that certain musical and acoustical parameters can affect listeners' ability to perceive these nuances. This hypothesis was tested with a pair of experiments in which highly trained musicians from the Montreal area had to discriminate between short excerpted passages from Vicentino's vocal compositions presented in the original 31-tone equal temperament (31-TET) and in a minimally recomposed 12-tone equal temperament version. Previous studies have estimated discrimination thresholds for absolute pitch and musical interval magnitude that generally lie below the differences found between the microtonally inflected pitches and melodic intervals of 31-TET and the pitches and melodic intervals of 12-TET. The results from Experiment 1 show that listeners can reliably discriminate between the two systems in most cases, but that harmonic and voice-leading contexts can greatly affect discrimination ability. Post-hoc comparisons provided further evidence that excerpts with similar voice-leading patterns elicit similar performance even though differences in pitch height vary. In Experiment 2, the pitches of the 12-TET versions were raised by a fifth of a tone to alter the differences in pitch height between the 12-TET and 31-TET versions of the excerpts. Shifting the differences in absolute pitch height significantly changed performance for some excerpts, but did not do so for others. This was attributed partially to the nature of the experimental design, and partially to the harmonic and voice-leading contexts of the excerpts.


73 A Bayesian Theory of Musical Pleasure
David Temperley
Eastman School of Music, Rochester, NY, USA
* = Corresponding author, dtemperley@esm.rochester.edu

An important part of music perception is the identification of underlying structures from the musical surface—structures such as meter, harmony, key, motive, and phrase structure. When an analysis is found that is high in probability, this naturally results in a feeling of reward and pleasure for the perceiver. From a Bayesian perspective, the probability of an analysis can be represented by its joint probability with the surface: for example, the probability of a metrical structure given a note pattern is represented by its probability in combination with the note pattern. If this quantity is high, and particularly if it rises fairly suddenly from one moment to the next, a sense of pleasure is predicted to result. I will argue that this explains well-known pleasurable phenomena in music, such as a return to diatonic harmonies after a chromatic interruption, or the highly syncopated flourish in a tabla performance leading up to a structural downbeat. Of particular interest, under the current theory, is the phenomenon of reanalysis. Let us suppose that, in some cases, the optimal analysis of an event is not found initially, but is only brought to the listener's attention by subsequent events. In this case, the probability of structure-plus-surface for the event may actually be increased in retrospect. This offers an explanation for the effectiveness of some well-known expressive devices in music, such as augmented sixth chords.

74 Key-Finding Algorithms for Popular Music
David Temperley* & Trevor de Clercq
Eastman School of Music, Rochester, NY, USA
* = Corresponding author, dtemperley@esm.rochester.edu

A new corpus of harmonic analyses of rock songs allows investigations into cognitive issues pertaining to popular music. The corpus (which has been reported elsewhere) contains 200 songs from Rolling Stone magazine's list of the "500 Greatest Songs of All Time"; both authors analyzed all 200 of the songs in Roman numeral notation. The current study focuses on the issue of key induction: what are the cues to tonality in rock? A series of probabilistic key-finding algorithms was implemented and tested. In one algorithm, a distribution of relative roots was gathered from the corpus, showing the proportion of chords with roots on each scale degree (1, #1/b2, 2, and so on); we call this a "root profile." Given a progression of absolute roots, a root profile can be used to find the key that generates the progression with highest probability; by Bayesian logic, this is the most probable key given the progression. Several variants of this approach were tried. In one variant, each chord was counted once; in another, each chord was weighted according to its duration; in another variant, metrical position was considered, giving higher probability to tonic chords on hypermetrically strong positions. The third algorithm yielded the best performance; this supports claims by some rock theorists that the metrical placement of harmonies in rock is an important cue to tonality. (We also tried several other algorithms based on pitch-class distributions rather than root distributions; these were less successful.) Examination of some of the model's errors reveals some other factors that may influence key judgments in popular music.
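A minimal sketch may make the root-profile logic concrete. This is not the authors' implementation: the profile values below are invented placeholders rather than the corpus-derived proportions, and only the duration-weighted variant is shown. With a uniform prior over keys, the key maximizing P(progression | key) is also the most probable key given the progression, which is the Bayesian step the abstract describes.

```python
import math

# Hypothetical root profile: P(chord root | key), indexed by the root's
# position in semitones above the tonic (0 = I, 5 = IV, 7 = V, ...).
# Placeholder numbers only, roughly normalized.
ROOT_PROFILE = [0.33, 0.005, 0.06, 0.02, 0.06, 0.20,
                0.01, 0.22, 0.02, 0.04, 0.02, 0.005]

def key_score(progression, key):
    """Duration-weighted log-probability of a progression of absolute
    chord roots (pitch classes 0-11) under one candidate key."""
    return sum(dur * math.log(ROOT_PROFILE[(root - key) % 12])
               for root, dur in progression)

def find_key(progression):
    """Most probable key (tonic pitch class), assuming a uniform prior."""
    return max(range(12), key=lambda k: key_score(progression, k))

# I-IV-V-I in G major: roots G, C, D, G, with durations in beats.
progression = [(7, 4), (0, 2), (2, 2), (7, 4)]
print(find_key(progression))  # expected: 7 (G)
```

The metrical variant described above would replace the plain duration weight with one that also boosts the tonic-chord term at hypermetrically strong positions.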


75 Neurodynamics and Learning in Musical Tonality
Edward Large* & Felix Almonte
Center for Complex Systems & Brain Sciences, Florida Atlantic University
* = Corresponding author, [email protected]

Tonality is fundamental to music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. As Zuckerkandl observed, "...musical tones point to one another, attract and are attracted," and these dynamic qualities "make melodies out of successions of tones and music of acoustical phenomena." The goal of this work is to uncover the basic neurodynamic principles that underlie tonal cognition and perception. First, a new theory of musical tonality is proposed, which treats the central auditory pathway as a complex nonlinear dynamical system. It predicts that, as networks of auditory neurons resonate to musical stimuli, stability and attraction relationships develop among frequencies, and these dynamic forces correspond to feelings of stability and attraction among musical tones. Next, a rule for Hebbian synaptic modification is introduced, which can change these responses in some ways, but not in others. Finally, a strong prediction about auditory neurophysiology is evaluated. In humans, auditory brainstem recordings, generated in the midbrain inferior colliculus (IC), reveal nonlinear population responses to combinations of pure tones and to musical intervals composed of complex tones. It is shown that a canonical model of phase-locked neural oscillation predicts complex nonlinear population responses to musical intervals that have been observed in the brainstems of both musicians and nonmusicians. This observation provides strong support for the theory. Implications for the existence of a musical 'universal grammar' are discussed.

76 Exploring Melodic Formulaic Structure Using a Convolutional Neural Network Architecture
Panos Mavromatis (1)*
(1) New York University, New York, USA
* = Corresponding author, [email protected]

Melodic formulas operate within the dynamic (time-dependent) modal environment of many world music idioms, and typically reflect the internalized knowledge of expert native carriers, as manifest in oral transmission and improvisation. This paper explores the formulaic system of modern Greek church chant in a connectionist framework designed to emulate a Hidden Markov Model. Replacing the discrete symbolic representation of the latter with a flexible distributed one offers significant improvement in computational efficiency. A four-layer convolutional feed-forward neural network architecture is employed. The first two layers map an n-gram input pattern to a hidden state of inner-layer activations; the next two layers map the hidden state to the prediction, namely the next output symbol. The model is trained by back-propagation. The framework of deep learning is invoked to fine-tune the network by preprocessing the input with a denoising autoencoder, trimming down unnecessary connections using the Optimal Brain Damage algorithm. To facilitate interpretation, the space of hidden states is discretized using Gaussian mixture clustering. By considering the much smaller set of state-classes obtained in this way, one can separate the formulaic parts of the melody from the non-formulaic ones, and can classify melodic formulas in a hierarchical tree. This classification reflects each formula's internal structure, as well as its function within the phrase schema in which it occurs. The analysis can be applied to any melodic idiom that is characterized by formulaic structure, such as Latin plainchant, British-American folksong, as well as Middle-Eastern and Indian classical or folk music. The proposed formulaic analysis and classification is intended to recover structured representations of the cognitive schemata that characterize internalized expert knowledge of the musical idiom in question.
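The predictive core of this architecture (an n-gram of symbols in, a distributed hidden state, the next symbol out) can be illustrated with a toy one-hidden-layer version. The sketch below (Python/numpy) is illustrative only: all sizes and the training melody are arbitrary, and the convolutional stacking, denoising-autoencoder pretraining, Optimal Brain Damage pruning, and Gaussian mixture clustering described above are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
V, N, H = 20, 4, 16  # vocabulary size, n-gram length, hidden units (arbitrary)

def encode(ngram):
    """One-hot encode an n-gram of symbol indices as a single vector."""
    x = np.zeros(V * N)
    for i, s in enumerate(ngram):
        x[i * V + s] = 1.0
    return x

W1 = rng.normal(0, 0.1, (H, V * N))  # input -> hidden state
W2 = rng.normal(0, 0.1, (V, H))      # hidden state -> next-symbol scores

def forward(x):
    h = np.tanh(W1 @ x)              # distributed hidden state
    p = np.exp(W2 @ h)
    return h, p / p.sum()            # softmax over the next symbol

def train_step(ngram, target, lr=0.1):
    """One step of back-propagation on cross-entropy loss."""
    global W1, W2
    x = encode(ngram)
    h, p = forward(x)
    dscores = p.copy()
    dscores[target] -= 1.0                 # gradient at the softmax
    dW2 = np.outer(dscores, h)
    dh = W2.T @ dscores
    dW1 = np.outer(dh * (1 - h ** 2), x)   # tanh derivative
    W2 -= lr * dW2
    W1 -= lr * dW1

# Toy corpus: a repeating melodic formula; learn to predict each next symbol.
melody = [0, 2, 4, 5, 4, 2] * 50
for _ in range(5):
    for t in range(N, len(melody)):
        train_step(melody[t - N:t], melody[t])

_, p = forward(encode(melody[:N]))  # context [0, 2, 4, 5]
print(p.argmax())                   # expected: 4, the symbol that follows
```

In the full model, the hidden vectors h computed here are the states that would subsequently be discretized by Gaussian mixture clustering into the state-classes used for formulaic analysis.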


77 Repetition as a Factor in Aesthetic Preference
Elizabeth Margulis (1)*
(1) University of Arkansas, Fayetteville, AR, USA
* = Corresponding author, ehm@uark.edu

Empirical work in a variety of domains has shown that multiple exposures to a stimulus can increase preference. Indeed, this effect has been shown for repeated exposures of a musical piece. But the effect of internal repetition – repetition of elements within the piece – has not been well studied. This experiment exposed 33 participants without musical training to short excerpts of solo music by contemporary composers Luciano Berio and Elliott Carter. Each excerpt could be heard in one of three conditions: unmodified, modified so that phrases were made to repeat immediately, and modified so that phrases were made to repeat later in the piece. Participants were asked to rate on a scale of 1 to 7 how much they enjoyed the excerpt, how interesting they found it, and how likely it was to have been written by a human artist rather than randomly generated by a computer. Participants found excerpts that had been modified to contain repeated phrases more enjoyable, more interesting, and more likely to have been written by a human artist than unmodified excerpts. Excerpts in the immediate repetition condition were found to be more enjoyable than excerpts in the delayed repetition condition, but excerpts in the delayed repetition condition were found to be more interesting than excerpts in the immediate repetition condition, illustrating an interesting dissociation between these two descriptors. These results are particularly striking because the original, unmodified excerpts all stem from the canon of twentieth-century art music, and the modifications made to them were relatively primitive: audio segments were simply extracted and reinserted later in the sound file. Yet this simple action significantly improved the participants' aesthetic assessments of the excerpts. These findings provide new support for the aesthetic role of within-piece repetitions.

78 Encoding and Decoding Sarcasm in Instrumental Music: A Comparative Study
Joseph Plazak (1,2)*
(1) The Ohio State University, Columbus, Ohio, USA, (2) Illinois Wesleyan University, Bloomington, Illinois, USA
* = Corresponding author, plazak.1@osu.edu

Music is often regarded as being similar to language. Further, it has been found to be capable of conveying affective information. In order to investigate the musical cues employed by performers and listeners to encode/decode affective information, two empirical studies were performed. Fourteen musicians recorded various musical passages with the intention of expressing one of four affects: joy, sadness, sarcasm, and sincerity. Each recording was analyzed, and four acoustic features were found to be consistently employed by performers to distinguish between the four target affects: 1) duration, 2) maximum pitch, 3) noisiness, and 4) number of voice breaks. In a follow-up study, the recordings were heard by 20 musicians who were tasked with identifying the performer's intended affect. The results revealed that participants used a greater variety of acoustic cues to decode musical affect, including: duration, pitch standard deviation, minimum pitch, maximum pitch, noisiness, mean harmonic-to-noise ratio, and various timbre cues. Surprisingly, sarcasm was the best identified of the four recorded affects, followed closely by sadness and then joy. The main acoustical features used to encode and decode musical sarcasm were a distinctive noisy timbre, a large amount of articulation variability, greater pitch variability, and somewhat shorter durations. These findings are consistent with the literature on the acoustic cues of sarcastic speech, further suggesting a similarity between music and language processing.


79 The Serious Minor Mode: A Longitudinal-Affective Study
David Huron* & Katelyn Horn
The Ohio State University, Columbus, Ohio, USA
* = Corresponding author, [email protected]

Hevner (1935) showed that Western-enculturated listeners judge minor-mode passages as sounding sadder than their major-mode counterparts. Less well known is that Hevner's study also showed that listeners judge minor-mode versions to be more "serious" sounding. Using a sample of 8,117 excerpts, a series of score-based empirical studies is presented tracing longitudinal changes in the use of the minor mode in Western art music between 1700 and 1950. Beginning around 1760, the minor mode became increasingly common, peaking in the 1890s, after which the proportion of minor-mode works steadily declined. At the same time, the minor mode became increasingly associated with louder dynamics, faster tempos, and larger melodic intervals (Post & Huron, 2009; Ladinig & Huron, 2010). In fact, minor-mode works in the 19th century are, in general, louder and faster than major-mode works from the same period – a complete reversal of the pattern evident in other centuries. The empirical results reported here are consistent with longstanding historical-stylistic interpretations of the rise of "Romanticism" beginning with the "Sturm und Drang" period of late Haydn, where passion, aggression, and seriousness became increasingly popular musical expressions (Post & Huron, 2009). At the same time, the acoustical features are shown to be consistent with an ethological model developed for interpreting animal signals (Morton, 1994). Both sadness and aggression are associated with lower than normal pitch (a defining property of the minor scale). However, while *sadness* is associated with quieter, slower, more mumbled articulation and smaller pitch variance, *seriousness* is associated with louder, faster, more enunciated articulation and larger pitch variance. That is, the use of the minor mode for expressing "seriousness" appears to exhibit acoustic features similar to animal expressions of aggression.

80 The Effect of Pitch Exposure on Sadness and Happiness Judgments: further evidence that "lower-than-normal" is sadder and "higher-than-normal" is happier
Parag Chordia (1)*, Avinash Sastry (2)
(1) Georgia Institute of Technology, Atlanta, USA, (2) Georgia Institute of Technology, Atlanta, USA
* = Corresponding author, PPC@GATECH.EDU

It is widely known that certain prosodic features are characteristic of sad speech. Motivated by this, music researchers have suggested that the affective connotation of scales is in part due to departures from scale norms. For example, minor scales are sad because they are in some sense "lower" than the more common major scale. To test this hypothesis, we designed an experiment in which subjects were asked to rate a common set of melodies after exposure to two different scale types, as described below. A web-based survey was conducted in which participants were asked to judge the happiness and sadness of various short musical excerpts (15-20 s). A total of 544 participants responded; their average age was 37 years and 88% were male. Stimuli were melodies that belonged to one of three scale types from North Indian classical music (NICM), which differed in the number of lowered scale degrees. What we term the LOW, MEDIUM, and HIGH scales contained 3, 2, and 1 flat scale degrees (Raag Todi, Raag Kafi, and Raag Khamaj, respectively). Each subject heard 10 exposure melodies and 5 test melodies. Subjects were split into two groups; the first group was exposed to the HIGH scale melodies, while the second group was exposed to the LOW scale melodies. All subjects were then given 5 test melodies from the MEDIUM scale. Raw scores were converted to z-scores on a per-subject basis. Consistent with our hypothesis, HIGH exposure subjects, compared with LOW exposure subjects, judged the test melodies to be more sad (0.33 vs. -0.03, p<.001) and less happy (-0.18 vs. 0.22, p<.001). All tests were corrected for multiple comparisons using the Tukey-Kramer statistic. These results are consistent with the hypothesis that judgments of sadness and happiness are related to departures from pitch norms, with "lower-than-normal" corresponding to sadness and "higher-than-normal" to happiness.


81 Reduced sensitivity to emotional prosody in a group of individuals with congenital amusia
William Forde Thompson (1)*, Manuela Marin (2) & Lauren Stewart (2)
(1) Department of Psychology, Macquarie University, Sydney, Australia, (2) Department of Psychology, Goldsmiths, University of London, London, UK
* = Corresponding author, bill.thompson@mq.edu.au

An important question surrounding congenital amusia is whether the impairment is restricted to music or whether it extends to domains such as speech prosody (tone of voice). Music and speech prosody are both auditory signals that acquire emotional meaning through changes in attributes such as pitch, timing, intensity, and timbre, and such acoustic attributes may be handled by similar mechanisms. If the capacity to process and interpret acoustic attributes is impaired when they occur in music, it might also be impaired when they occur in speech prosody, leading to reduced sensitivity to emotional prosody. Twelve British individuals with congenital amusia (diagnosed by the MBEA) and 12 matched controls judged the emotional connotation of 96 spoken phrases. Phrases were semantically neutral, but prosodic cues (tone of voice) communicated each of six emotions: happiness, sadness, fear, irritation, tenderness, and no emotion. Two pitch threshold tasks were also administered to determine participants' thresholds for detection of pitch change and discrimination of pitch direction. Classification of emotions conveyed by prosodic stimuli was significantly less accurate among the amusic group (M=77.87) than among matched controls (M=88.19), p<.01. The amusic group was significantly poorer than the normative group at classifying prosodic stimuli intended to convey happiness, sadness, tenderness, and irritation. The amusic group also exhibited reduced discrimination of pitch direction, but not pitch change. The results suggest that the impairment in congenital amusia occurs at a stage of processing that is relevant to both music and speech. Results are consistent with the hypothesis that music and speech prosody draw on overlapping cues for communicating emotion. Thus, when individuals are impaired at perceiving pitch-related attributes in music, they exhibit reduced ability to decode emotional connotations from tone of voice.

82 The Role of Metrical Structure in the Acquisition of Tonal Knowledge
Matthew Rosenthal (1)*, Rikka Quam (1), Erin Hannon (1)
(1) University of Nevada, Las Vegas, Nevada, United States
* = Corresponding author, rosent17@unlv.nevada.edu

Experienced listeners possess a working knowledge of pitch structure in Western music, such as scale, key, harmony, and tonality, which develops gradually throughout childhood. It is commonly assumed that tonal representations are acquired through exposure to the statistics of music, but few studies have attempted to investigate potential learning mechanisms directly. In Western tonal music, tonally stable pitches not only have a higher overall frequency of occurrence, but they may also occur more frequently at strong than weak metrical positions, providing two potential avenues for tonal learning. Two experiments employed an artificial grammar-learning paradigm to examine tonal learning mechanisms. During a familiarization phase, we exposed nonmusician adult listeners to a long (whole-tone scale) sequence with certain distributional properties. In a subsequent test phase we examined listeners' learning using grammaticality or probe-tone judgments. In the grammaticality task, participants indicated which of two short test sequences conformed to the familiarization sequence. In the probe-tone task, participants provided fit ratings for individual probe tones following short "reminder" sequences. Experiment 1 examined learning from overall frequency of occurrence. Grammaticality judgments were significantly above chance (Exp. 1a), and probe-tone ratings were predicted by frequency of occurrence (Exp. 1b). In Experiment 2 we presented a familiarization sequence containing one sub-set of pitches that occurred more frequently on strong than on weak metrical positions and another sub-set that did the opposite. Overall frequency of occurrence was balanced for both sub-sets. Grammaticality judgments were again above chance (Exp. 2a), and probe-tone ratings were higher for pitches occurring on strong metrical positions (Exp. 2b). These findings suggest that, in principle, meter may provide a framework for learning about the tonal prominence of pitches in Western music.


83 Memory for musical sequences beyond pitch: Grammatical and associative processes
Ric Ashley
Northwestern University, Evanston, Illinois, USA
* = Corresponding author, ric.ashley@sbcglobal.net

The human capacity for musical memory has interested researchers for over a century. Memory for music can be detailed and concrete even without conscious effort to encode. We hypothesize that this is because of a symbiotic relationship between memory and musical structure, and we explore this hypothesis in the experiments reported here. The stimuli used in these experiments are drum patterns taken from a published multimedia corpus of audio recordings, video recordings, and transcriptions. Viewed as probabilistic grammars, these patterns have a number of important features: they are significantly asymmetric in their transition probabilities; they are sparse in their matrix structures; and they are referential, in that one sound (a particular instrument in the drum set) is centrally connected to all other sonic elements. These grammatical features parallel salient aspects of tonal harmony, allowing us to test the roles of these different features with regard to musical memory. Participants (university students without significant musical training) heard drum sequences taken from the real-world corpora, or versions of these manipulated for symmetry/asymmetry, hierarchy/referentiality, and sparseness/density, in a standard statistical learning paradigm. They then heard pairs of examples, one of which they had heard before and one of which was a lure, and identified which of the two examples was previously heard. Data collection is ongoing. Results to date demonstrate that memory is enhanced for sequences with grammars that are asymmetrical, referential, and sparse; manipulation of any of these factors degrades memory for these sequences. These features work independent of pitch, suggesting that they are primary cognitive elements of perception beyond the domain of musical pitch. Parallels with linguistic structures are considered, as well as the way in which musical grammars relate to basic aspects of associative memory (chaining, buffering, hierarchic structure, and effects of context).

84 More Than Meets the Ear: Memory for Melodies Includes the Meter
Sarah C. Creel (1)*
(1) UC San Diego, Cognitive Science, La Jolla, CA 92093-0515
* = Corresponding author, creel@cogsci.ucsd.edu

Recent work suggests that listeners have very detailed memory for individual pieces of music, and that this detail influences seemingly structural aspects of music perception such as meter. The current study examines whether such detailed musical knowledge may underpin comprehension of different musical styles. Across multiple experiments, listeners, unselected for musical training, heard six melodies each in two different musical "styles" which were distinct in instrumentation (saxophone and harp / French horn and accordion) and meter (3/4 and 6/8). Melodies were constructed so that, when played alone, they were ambiguous between 3/4 and 6/8 meters, but contexts (accompaniments) suggested either 3/4 or 6/8 meter. Across participants, each melody was heard in all possible combinations of timbre and meter. First, listeners heard each melody several times in style-specific context while doing a covert task. Then, in a "probe" phase, listeners heard each melody without its instrumental context, followed by a probe drumbeat continuation in either 3/4 time or 6/8 time. Previous work (Creel, in press) suggests that listeners prefer the metrical probe matching their previous experience with that melody. The twists in the current study are that some melodies were similar to one another (in timbre), and that listeners were tested not only on familiar melodies but also on new melodies with meter-associated timbres. Across experiments, listeners selected exposure-consistent metrical continuations for familiar melodies when the timbre matched the original presentation (p<.001), but not when timbre changed (p>.2). Listeners generalized metrical knowledge to new melodies only when both timbre and motif content matched. This suggests that listeners encode timbre, melody, and meter information in integrated fashion, and that multiple co-varying musical properties may give rise to style-specific expectations.


85 The Effect of Scale-Degree Qualia on Short-Term Memory for Pitch
Panayotis Mavromatis (1)*, Morwaread Farbood (1)
(1) New York University, New York, USA
* = Corresponding author, panos.mavromatis@nyu.edu

We present an experimental investigation of how the tonal interpretation of a pitch sequence affects its retention in short-term memory. When a pitch sequence is perceived in an unambiguous tonal context, each pitch is qualified with a mental representation of the underlying scale-degree qualia. We hypothesize that this tonal interpretation, whenever available and unambiguous, improves the pitch's retention in short-term memory. Two experiments are presented, employing the experimental paradigm developed by Diana Deutsch, in which a target tone is presented as the first member of a pitch sequence, followed by a brief silence, followed by a probe tone to be compared with the target. The probe was chosen to be the same as, a semitone higher than, or a semitone lower than the target. Subjects were asked to determine whether the first pitch and the final pitch were the same or different. Accuracy of identification was considered to be a measure of short-term retention of the original target tone. In Experiment 1 (a pilot), pitch sequences fit into three contexts: triadic, diatonic, and atonal. Listeners were more accurate in the triadic context, followed by the diatonic context, and least of all the atonal context. For Experiment 2, the three conditions were (1) diatonic, clear tonal implications; (2) diatonic, tonally ambiguous; (3) non-diatonic. Register and set cardinality were fixed, there was no pitch repetition, and pitches a semitone distant from the target were not allowed prior to the final probe tone. Contour and transposition level were randomized, and different scale-degree qualia for the target pitch in Condition 1 were utilized. Results indicated that the stronger the tonal context, the greater the accuracy of pitch recall. Moreover, scale-degree qualia appear to have an unconscious effect on pitch memory retention within a tonal context.

86 The Verse-Chorus Question: How Quickly and Why Do We Know Verses From Choruses in Popular Music?
Benjamin Anderson (1)*, Benjamin Duane (1), Richard Ashley (1)
(1) Northwestern University, Evanston, IL, USA
* = Corresponding author, ben-anderson@northwestern.edu

A number of recent studies have investigated what information listeners can glean from less than one second of music. From genre (Gjerdingen & Perrott, 2008) to emotion (Ashley, 2008) to song titles (Krumhansl, 2010), listeners gather considerable information from short sounds. This study extends this topic to the perception of musical form, using a brief-exposure paradigm to investigate how quickly listeners can distinguish between verses and choruses in popular music. We also investigate what information listeners might use to make this distinction. We recruited experienced participants who either knew the difference between verses and choruses or played in a rock band. They heard 300 randomly selected excerpts from 30 popular songs, ten from each song, with five different lengths taken from the verses and choruses. The lengths were 100, 250, 500, 1000, and 2000 ms. The stimuli were taken from unfamiliar tracks by familiar artists and from familiar tracks by familiar artists. Participants were instructed to push buttons based on whether an excerpt came from a verse or a chorus. We hypothesized that participants would perform better on the longer samples and that there would be a performance benefit when participants recognized the song. On the basis of 8 participants to date, using a chi-squared test, performance was significantly above chance (p<0.001 for all lengths), suggesting that participants can distinguish verses from choruses even at 100 ms. Using a z-test, performance at 100 ms was significantly worse than performance at 250 (p=0.01), 500 (p<0.001), 1000 (p<0.01), and 2000 ms (p<0.001). What was surprising, however, was that performance was significantly better for unfamiliar examples (p<0.001). In the familiar condition, participants may have needed to consider the specific song rather than simply determining whether they heard a verse or a chorus.


87 The Relationship Between Music Aptitude and the Ability to Discriminate Tone Contours in the Cantonese Language
Alice Asako Matsumoto* & Caroline Marcum
Eastman School of Music, University of Rochester, Rochester, NY, USA
* = Corresponding author, aliceasakomatsumoto@gmail.com
Previous studies have indicated that there is a correlation between music aptitude and lexical tone discrimination (Copeland, 2009; Dankovicová et al., 2007; Kolinsky et al., 2009). This study builds upon that research by testing non-tonal language speakers' ability to identify lexical tones and comparing their results with music aptitude scores. A Cantonese Tone Contour Discrimination Test was administered in a classroom setting to 47 participants from the Eastman School of Music in Rochester, NY. Participants were undergraduates enrolled in either a remedial or an accelerated music theory class, based on placement testing. In the first task, listeners heard an isolated word and identified it as rising, falling, or level tone. In the second, listeners heard a pair of sentences and were asked for a same-different discrimination. In half of the items, the sentences differed only by the contour of the final syllable. Participants identified the contour of the final syllable of the second sentence by circling arrows on an answer sheet for the three types of lexical tone. Music aptitudes were measured by the Advanced Measures of Music Audiation (AMMA) test (Gordon, 1989). After the test, participants completed a questionnaire regarding background in language and music. Results showed a main effect for music achievement (theory class placement): means were significantly higher for students in the accelerated music theory class, compared with the remedial class, F(1, 39) = 9.38, p < .004. There was also a main effect for task, with performance on the isolated words higher than the sentence context, F(1, 78) = 110.57, p < .001. There was no significant correlation between the AMMA score and the ability to discriminate tone contours (Pearson correlation r = .036, n.s.). This finding thus supports research in music education that aptitude and achievement are distinct aspects of musicianship (Gordon, 2007).

88 Why does Congenital Amusia Only Affect Speech Processing in Minor Ways? Evidence from a Group of Chinese Amusics
Fang Liu (1)*, Cunmei Jiang (2), William Forde Thompson (3), Yi Xu (4), Yufang Yang (5), Lauren Stewart (6)
(1) Center for the Study of Language and Information, Stanford University, Stanford, USA, (2) Music College, Shanghai Normal University, Shanghai, China, (3) Department of Psychology, Macquarie University, Sydney, Australia, (4) Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK, (5) Institute of Psychology, Chinese Academy of Sciences, Beijing, China, (6) Department of Psychology, Goldsmiths, University of London, London, UK
* = Corresponding author, [email protected]
Congenital amusia is a neuro-developmental disorder of pitch processing that causes severe problems with music perception and production. Recent research has indicated that this disorder also impacts upon speech processing in subtle ways for speakers of both tone and non-tonal languages. This study further investigated why congenital amusia mainly manifests itself in the music domain, but rarely in the language domain. Thirteen Chinese amusics and thirteen matched controls whose native language was Mandarin Chinese participated in a set of tone and intonation perception tasks and two pitch threshold tasks. The tone perception tasks involved identification and discrimination of Chinese words that shared the same segments but differed in tone. The intonation perception tasks required participants to identify and discriminate statements and questions that differed in various acoustic characteristics (pitch, rhythm, and intensity) across the entire utterances. The pitch threshold tasks involved the use of adaptive-tracking, forced-choice procedures to determine participants' thresholds for detection of pitch change and discrimination of pitch direction. Compared with controls, amusics showed impaired performance on word discrimination in both natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for both pitch-change detection and pitch-direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, amusia does not appear to affect Chinese amusics' performance on tasks that involve multiple acoustic cues to communicative meaning. Only when the tasks contained mainly pitch differences between stimuli, which seldom occur in everyday speech, did amusics show impaired performance compared to controls. These findings provide insight into why amusics rarely report language problems in daily life, and help understanding of the non-domain-specificity of congenital amusia.
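Abstract 88's thresholds rest on adaptive tracking; a minimal sketch of one common rule (a 2-down/1-up staircase, which converges near 70.7% correct) follows. The step sizes, halving scheme, and stopping criterion are illustrative assumptions, not the authors' published settings, and `respond` stands for a single forced-choice trial at a given pitch difference.

```python
# Minimal 2-down/1-up adaptive staircase (after Levitt, 1971) for pitch-change
# detection. respond(delta) must return True when the listener answers
# correctly at a pitch difference of `delta` semitones.
def run_staircase(respond, start=2.0, step=0.5, min_step=0.05, n_reversals=8):
    delta, streak, last_move, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        move = 0
        if respond(delta):
            streak += 1
            if streak == 2:              # two consecutive correct: step down
                move, streak = -1, 0
        else:                            # any error: step up
            move, streak = +1, 0
        if move:
            if last_move and move != last_move:
                reversals.append(delta)  # direction changed: record a reversal
                step = max(step / 2, min_step)
            last_move = move
            delta = max(delta + move * step, min_step)
    return sum(reversals[2:]) / len(reversals[2:])  # mean, early reversals dropped

# An error-free simulated listener with a true threshold of 0.4 semitones:
print(round(run_staircase(lambda d: d > 0.4), 2))   # converges near 0.4
```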


89 Music-Language Correlations and the "Scotch Snap"
Nicholas Temperley (1)*, David Temperley (2)
(1) School of Music, University of Illinois at Urbana-Champaign, USA, (2) Eastman School of Music, University of Rochester, USA
* = Corresponding author: [email protected]
The Scotch snap (SS) is a rhythmic pattern featuring a sixteenth-note on the beat followed by a dotted eighth-note. It has often been said to be characteristic of Scottish music; informal observation suggests that it is common in English music as well. We present a musical corpus analysis showing that, indeed, the SS is common in both Scottish and English songs, but virtually nonexistent in German and Italian songs. We explore possible linguistic correlates for this phenomenon. Our reasoning is that languages in which stressed syllables are often short might tend to favor the SS pattern. The traditional distinction between long and short vowels correlates partly with the SS pattern across languages, but not completely. (German allows short stressed vowels, but the SS pattern is not common in German music.) We then examine the duration of stressed syllables in four modern speech corpora: one British English, one German, and two Italian. British English shows a much higher proportion of very short stressed syllables (less than 100 msec) than the other two languages. Four vowels account for a large proportion of very short stressed syllables in British English, and also constitute a large proportion of SS tokens in our English musical corpus. This suggests that the SS pattern may have arisen from attempts to match the natural rhythm of English speech. Taken together with other recent work, our study provides additional evidence for the influence of linguistic rhythm on musical rhythm.

90 A comparison of speech vs. singing in foreign vocabulary development
A. Good (1)*, J. Sullivan (2), & F. A. Russo (1)
(1) Department of Psychology, Ryerson University; (2) Department of Psychology, Saint Francis Xavier University
* = Corresponding author, agood@arts.ryerson.ca
The current study extends to second language learning the popular notion that memory for text can be supported by song. In the context of a second language classroom, singing can be intrinsically motivating, attention focusing, and simply enjoyable for learners of all ages. For native text, the melodic and rhythmic context of song enhances recall of text (Wallace, 1994). However, there is limited evidence that these benefits extend to learning of foreign text. In this study, Spanish-speaking Ecuadorian children learned a novel English passage for two weeks. Children in a sung condition learned the passage as a song and children in the spoken condition learned the passage as an oral poem. After the third learning session, children were asked to reproduce the passage in the method in which they were taught (sung or spoken) while reading the lyrics, and were tested on their ability to correctly pronounce the foreign words. After the fourth session, children were tested on their ability to recall verbatim as many of the words as possible, and they were asked to translate 10 target words (or terms) from the passage into Spanish. As predicted, children in the sung condition demonstrated superior verbatim recall. In addition, their pronunciation of vowel sounds and translation success were enhanced. These findings have important implications for second language instruction. The presentation will also consider mechanisms such as dual encoding and automatic rehearsal that may be responsible for the gains observed in learning second language through song.
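A brief illustration of the corpus measure in abstract 89 above: counting Scotch snaps is mechanical once a symbolic encoding is fixed. A minimal sketch, assuming a monophonic list of (onset, duration) pairs in beat units (the corpus's actual encoding is not specified in the abstract):

```python
# Count "Scotch snap" tokens: a sixteenth (0.25 beat) on the beat followed
# immediately by a dotted eighth (0.75 beat). Encoding is a simplified
# stand-in for whatever the corpus actually uses.
def count_scotch_snaps(notes, tol=1e-6):
    snaps = 0
    for (on1, dur1), (on2, dur2) in zip(notes, notes[1:]):
        on_beat = abs(on1 - round(on1)) < tol              # falls on the beat
        snap_durations = abs(dur1 - 0.25) < tol and abs(dur2 - 0.75) < tol
        contiguous = abs((on1 + dur1) - on2) < tol         # no rest between
        if on_beat and snap_durations and contiguous:
            snaps += 1
    return snaps

melody = [(0, 0.25), (0.25, 0.75), (1, 0.5), (1.5, 0.5)]   # one snap at beat 1
print(count_scotch_snaps(melody))                          # -> 1
```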


91 Playing in a Dialect: A Comparison of English and American Vowels and Trombone Timbres
Katie Cox (1,2)*
(1) Eastman School of Music, Rochester, NY, (2) Truman State University, Kirksville, MO
* = Corresponding author, kac751@gmail.com
While the relationship between music and language has been extensively explored in the areas of rhythm, melody, harmony, and cognitive processing, the easily drawn parallel between vowels and timbre has been largely ignored. This project has two aims: first, to confirm that the reported distinction in timbre does exist between English and American trombonists; second, to correlate that distinction with relevant differences in the vowel content of each language group. Participants were asked to submit a recording of a prescribed standard etude; this etude contained pre-determined target notes, which were measured at formants 1 and 2. From this data, an inventory of timbres was constructed for each subject group. This inventory was compared to the vowel distribution (also measured at F1 and F2) for each language. For timbre, both F1 and F2 showed a trend to be lower among English subjects than Americans. This correlates with the tendency for low back vowels in British English to cluster in a lower position than in American English, implying that the sounds naturally occurring in a player's language may impact his preferred musical sound options.
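Abstract 91 does not say how F1 and F2 were measured; one common recipe is LPC root-finding, sketched here under illustrative assumptions (the sample rate, LPC order, and low-frequency cutoff are conventional defaults, and the file name is hypothetical):

```python
# Rough F1/F2 estimation for a sustained note or vowel via LPC root-finding.
import numpy as np
import librosa

def first_two_formants(path, sr=16000, order=12):
    y, sr = librosa.load(path, sr=sr)
    a = librosa.lpc(y, order=order)                 # LPC polynomial coefficients
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(float(np.angle(r)) * sr / (2 * np.pi) for r in roots)
    freqs = [f for f in freqs if f > 90]            # drop near-DC roots
    return freqs[:2]                                # (F1, F2), rough estimates

# print(first_two_formants("trombone_target_note.wav"))  # hypothetical file
```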


92 Expectation and Emotion in a Live Concert Experiment
Hauke Egermann (1)*, Marcus Pearce (2), Geraint Wiggins (2), Stephen McAdams (1)
(1) CIRMMT, Schulich School of Music, McGill University, Montreal, Canada, (2) Centre for Cognition, Computation and Culture, Goldsmiths, University of London, London, UK
* = Corresponding author, hauke@egermann.net
We investigated the often-theorized role of musical expectations in inducing listeners' emotions, and present results from a live flute concert experiment with 50 musically trained participants. Using the CIRMMT Audience Response System, we continuously measured subjective experience (using 50 wireless iPods) and peripheral psychophysiological changes. To confirm the existence of the link between expectation and emotion, we used a three-fold approach. (1) Based on an information-theoretic model, musical expectancies are predicted by analyzing the musical stimuli used (six pieces of solo flute music). (2) A continuous expectation rating scale was employed by half of the audience to measure the unexpectedness of the music heard. (3) Finally, emotions also were measured using a multi-component approach: subjective feeling (rated continuously by the other half of audience members), expressive behavior, and peripheral arousal (both measured on all 50 participants). We predicted and observed a relationship between high-information-content musical events, the violation of musical expectations (in corresponding ratings), and continuously measured emotional reactions. Thus, musical structures leading to expectation reactions are also thought to be manifested in emotional reactions at different emotion component levels. These results emphasize the role of musical structures in emotion induction, leading to a further understanding of the frequently experienced emotional effects of music in everyday life.

93 A Synthetic Approach to the Study of Musically-Induced Emotions
Sylvain Le Groux (1)* & Paul F. M. J. Verschure (1,2)
(1) Universitat Pompeu Fabra, Barcelona, Spain, (2) ICREA, Barcelona, Spain
* = Corresponding author, [email protected]
Music can induce changes in emotional, cerebral and physiological states. Yet, the relationship between specific musical parameters and emotional responses is still not clear. While it is difficult to obtain reproducible and independent control of musical parameters from human performers, computer music systems can generate fully parameterized musical material. In this study, we use such a system, called the SMuSe, to generate a set of well-controlled musical stimuli, and analyze the influence of parameters of musical structure, performance and timbre on emotional responses. Thirteen students (5 women, M: 25.8, range: 22-31) took part in the experiment. They were asked to rate three blocks of sound samples in terms of the emotion they felt on a 5-point SAM scale of valence, arousal and dominance. These blocks corresponded to changes in the structure parameter: 3 modes (Minor, Major, Random) * 3 registers (Bass, Tenor, Soprano); performance level: 3 tempi (Lento, Moderato, Presto) * 3 dynamics (Piano, MezzoForte, Forte) * 3 articulations (Staccato, Regular, Legato); and timbre: 3 attack times (Short, Medium, Long) * 3 brightnesses (Dull, Regular, Bright) * 3 dampings (Low, Medium, High). For each block, we followed a repeated measures design where the conditions were presented in random order. Repeated measures MANOVAs showed that minor and random modes were more negative, while soprano register was more arousing. Staccato articulations, presto tempi and forte dynamics felt more arousing but also more negative. Presto tempi and forte dynamics were perceived as more dominant. Bright sounds with short attack and low damping were more arousing. Longer attacks and brighter sounds felt more negative. Finally, bright and low-damping sounds were perceived as more dominant. This study shows the potential of synthetic music systems for analyzing and inducing musical emotion. In the future, interactive music systems will be highly relevant for therapeutic applications but also for sound-based diagnosis, interactive gaming, and physiologically-based musical instruments.
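The information-theoretic predictions in abstract 92 above (point 1) assign each musical event an information content, IC = -log2 P(event | context). A drastically simplified first-order stand-in for the variable-order models typically used for melodic expectancy, with a hypothetical pitch sequence:

```python
# Information content under a first-order (bigram) pitch model with add-one
# smoothing; for the sketch, the model is trained on the very sequence it
# evaluates. Peaks mark surprising (high-information-content) events.
import math
from collections import Counter

def information_content(pitches, alphabet_size=128):
    bigrams = Counter(zip(pitches, pitches[1:]))
    contexts = Counter(pitches[:-1])
    ics = []
    for prev, cur in zip(pitches, pitches[1:]):
        p = (bigrams[(prev, cur)] + 1) / (contexts[prev] + alphabet_size)
        ics.append(-math.log2(p))
    return ics                      # one value per note after the first

melody = [60, 62, 64, 65, 64, 62, 60, 71]   # hypothetical MIDI pitches; 71 leaps
print([round(ic, 2) for ic in information_content(melody)])
```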


94 Affective Analysis Using the Progressive Exposure Method: The second movement of Beethoven's Pathétique sonata (Op. 13)
Joshua Albrecht (1)*, David Huron (1), Shannon Morrow (2)
(1) Ohio State University, Columbus, OH, USA, (2) Westminster College of the Arts at Rider University, Princeton, NJ, USA
* = Corresponding author, [email protected]
In recent years, a number of approaches have been used to measure a listener's perception of musical affect using continuous self-report throughout a listening experience. While this method retains some degree of ecological validity, there are some significant methodological difficulties with using this approach, and the resulting data can be very difficult to interpret. In this study, our principal aim is to further extend the approach of measuring perceived musical affect throughout a work by using a progressive exposure method, in which listeners hear short excerpts of music with some musical context surrounding each excerpt. This method allows the listener time to introspect on a clearly defined excerpt before responding. We report results from two exploratory studies. In the first, free-response study, 5 musician listeners provide descriptive terms for 5-second recorded excerpts of Beethoven's Pathétique. A content analysis on the terms yields 15 affective dimensions appropriate for this specific work. In the second study, each listener uses these dimensions to rate randomly-ordered 5-second excerpts for three of the 15 affective dimensions. The ratings for each excerpt, when reassembled in temporal order, provide a diachronic portrait of the perceived affect in the piece. Preliminary results indicate a high level of intra- and inter-subject reliability, consistent with the idea that Western-enculturated listeners evaluate musical affect in similar ways based on objective musical features. These musical features are measured for each excerpt, and a multiple regression analysis is carried out to model how listeners encode musical affect. Several representative passages are then examined in detail, illustrating the role that affective analysis can play in providing insight to traditional music analysis.

95 A new scale to identify individuals with strong emotional responses to music: Absorption in Music Scale (AIMS)
F. A. Russo (1)* & G. M. Sandstrom (2)
(1) Department of Psychology, Ryerson University, Toronto, Canada, (2) Department of Psychology, University of British Columbia
* = Corresponding author, russo@psych.ryerson.ca
Studies of emotional responses to music have often focused on the musical characteristics used to convey emotion (e.g., Hevner, 1936; Grewe, Nagel, Kopiez & Altenmuller, 2007). Other studies have focused on the moderating effects of culture (e.g., Balkwill & Thompson, 1999) and preference (e.g., Blood & Zatorre, 2001; Menon & Levitin, 2005). Little work has looked at individual differences that might affect emotional responses (with the exception of music training, which has not been found to relate to the strength of emotional responses). The current study fills this gap, providing support for the notion that absorption may be an important moderator of emotional responses to music. We created the Absorption in Music Scale (AIMS), a 34-item measure of individuals' ability and willingness to allow music to draw them into an emotional experience. It was evaluated with a sample of 166 people, and exhibits good psychometric properties. The scale converges well with measures of similar constructs, and shows reliability over time. Importantly, in a test of criterion validity, emotional responses to music were correlated with the AIMS scale but not correlated with measures of empathy or music training. We foresee this scale being used to select participants for studies involving emotional judgments; researchers may wish to exclude individuals whose AIMS scores are unusually low. Alternately, or additionally, researchers may wish to use the AIMS score as a covariate, to account for some of the variability in emotional responses. Absorption in music has the potential to be a meaningful individual difference variable for predicting the strength of emotional responses to music.


96 The Development of Interpretation During Practice and Public Performance: A case study
Tânia Lisboa (1)*, Alexander P. Demos (2), Roger Chaffin (2), & Kristen T. Begosh (2)
(1) Royal College of Music, London, UK, (2) University of Connecticut, Storrs, CT, USA
* = Corresponding author, tlisboa@rcm.ac.uk
How do technique and interpretation develop over the course of learning a new piece? Over a two-year period, we studied an experienced cellist learning the Prelude from J. S. Bach's Suite No. 6 for solo cello. We measured bar-to-bar changes in sound level and tempo for 19 practice and 6 live performances. Cross-correlation was used to track the development of similarities between adjacent performances and the similarity of each performance to the final public performance, considered the best by the cellist. Cross-correlations for tempo increased monotonically over time; sound level showed a more complex pattern. Variations in tempo became progressively more consistent from one performance to the next as practice progressed. In addition, we examined tempo and sound level fluctuations during performance using linear mixed-effects (growth curve) modeling to identify effects of technical and musical decisions that the cellist reported making during practice and the performance cues (PCs) that she reported using as mental landmarks during performance. As expected, tempo followed an arch-shaped function both across the entire piece and within phrases. Playing slowed at expressive and interpretive PCs, speeded up more at PCs for bowing and fingering, and did so more in polished than in practice performances. At locations where there were memory hazards, playing slowed only in practice performances. Sound level increased from practice to polished performances at places where the cellist reported PCs for fingering and hand position, as well as at technical difficulties. Over both polished and practice performances, sound level and tempo decreased at interpretive PCs, and sound level was more variable at expressive PCs. This suggests that PCs provided points of control that allowed the cellist to introduce variation in the highly prepared motor sequences of her performance.

97 Effects of motor learning on auditory memory for music
Rachel M. Brown (1)*, Caroline Palmer (1)*
(1) McGill University, Montreal, Canada
* = Corresponding authors, [email protected], [email protected]
Most people can recognize the melodies they know, but not all people can perform the melodies they know. Motor theories of perception and recent evidence suggest that the ability to produce sensory outcomes can influence the perception of similar outcomes; other views suggest that motor learning is not crucial to perception. Three experiments examined how skilled performers' ability to recognize music is influenced by the type of auditory-motor learning and by individual differences in mental imagery abilities. In each experiment, trained pianists learned short, novel melodies in four conditions: 1) auditory-only (listening to recordings), 2) motor-only (performing without auditory feedback), 3) strongly-coupled auditory-motor (performing with normal auditory feedback), and 4) weakly-coupled auditory-motor (performing along with recordings). Pianists subsequently listened to recordings of melodies and indicated those they recognized, and completed auditory and motor imagery tests. In Experiment 1, pianists practiced each melody either three or six times. In Experiment 2, pianists heard and performed along with acoustically-varying melodies. Results from both experiments indicated that auditory-only learning yielded better recognition scores than motor-only learning, and strongly-coupled auditory-motor learning yielded better recognition than auditory-only learning. Greater amounts of auditory or motor practice enhanced recognition. Imagery abilities modulated learning effects: auditory imagery abilities correlated positively with recognition following motor-only learning (Experiment 1), suggesting that imagery skills compensated for missing feedback at learning. Recognition scores correlated positively with slower tempi and decreased timing and intensity variation in the melodies with which pianists performed (weakly-coupled learning condition; Experiment 2). In a third experiment, motor feedback was enhanced at learning. These findings demonstrate that motor learning can influence auditory memory beyond auditory learning alone, and that motor learning effects are modulated by the strength of auditory-motor coupling, imagery abilities, and acoustic variation at learning.


98 Measuring Cognitive Thrift in the Improvisational Technique of Bill Evans
Austin Gross*
Independent scholar, Lancaster, Pennsylvania
* = Corresponding author, [email protected]
In their studies of oral epic poetry, Harvard scholars Milman Parry and Albert Lord noted that singers used the same phrase structures again and again, but varied these models to fit the given idea of the story. In their view, singers would create a way to express one idea, then alter the wording within this model to express a new idea. Parry and Lord suggested that the reason for this was that a singer would not search for an entirely new phrase structure if another could be adapted, because this would be unnecessary. They referred to these phrase structures as formulas, and suggested that the purpose behind this process of adaptation was thrift. Of course, this process may or may not be conscious. Psychologist Jeff Pressing, in his review of other work in cognitive science as well as his own work on improvisational behavior, noted that with increased practice controlled processing develops into automatic motor processing. These automatic motor movements exist as specific motor programs that an improviser draws upon in performance. This paper utilizes these two areas as a framework to interpret analyses of improvised solos by jazz pianist Bill Evans. Other authors, such as Gregory Smith and Barry Kenny, have noted local formulas in Evans's playing, but these are relatively fixed. The present work locates flexible melodic models that occur in multiple solos by Evans, but which are elaborated differently. These models fulfill the variability requirement of Parry and Lord's "formula," exist at a comparable level of structure, and provide musical evidence to support claims by Pressing and others about the cognitive aspects of improvised behavior. Through these analyses, this work provides musical data to examine expert behavior in the area of musical improvisation.

99 How do singers tune?
Johanna Devaney (1,2)*, Jonathan Wild (1,2), Peter Schubert (2), Ichiro Fujinaga (1,2)
(1) Center for Interdisciplinary Research in Music Media and Technology (CIRMMT), (2) Schulich School of Music, McGill University, Montreal, Canada
* = Corresponding author, [email protected]
Singers' intonation practices have been theorized since the Renaissance. There remain, however, a number of open questions about how consistently singers tune in performance and how their tuning relates to idealized tuning systems, such as Just Intonation, Pythagorean tuning, and equal temperament. The results from this experiment will be considered in the context of other findings on singing and instrumental intonation practices, as well as the literature on the just noticeable difference in pitch perception. This project consisted of two experiments. In the first, groups of six non-professional and professional singers performed Schubert's "Ave Maria" both a cappella and with accompaniment. In the second, three different SATB quartets performed a set of exercises and Praetorius's "Es ist ein Ros' entsprungen". Interval size data was extracted from the recordings with a set of automated methods. The analysis of this data focused on the tuning of melodic semitones and whole tones and vertical m3, M3, P4, TT, P5, m6, M6, m7, and P8. The data collected from these recordings provides a wealth of detailed information about the singers' self-consistency and the amount of variation between singers. Overall, the singers in the experiments tended towards equal temperament, although there was a wide range in interval size for both the melodic and vertical intervals. In the solo experiment, there was an effect for training amongst the two groups, with the professionals tending to be closer to equal temperament. In the ensemble experiment, there emerged a slight, but significant, trend towards Just Intonation in cadential contexts for vertical intervals.
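Abstract 99's comparisons reduce to interval sizes in cents; the arithmetic, with hypothetical F0 measurements and three ideal major-third sizes, is:

```python
# Interval size in cents from two fundamental-frequency estimates, compared
# with equal-tempered, Just, and Pythagorean ideals for a major third.
import math

def cents(f_low, f_high):
    return 1200 * math.log2(f_high / f_low)

f0_alto, f0_soprano = 440.0, 554.0          # measured F0s in Hz (hypothetical)
measured = cents(f0_alto, f0_soprano)       # ~398.9 cents

ideals = {
    "equal temperament M3": 400.0,
    "Just Intonation M3 (5/4)": 1200 * math.log2(5 / 4),    # ~386.3
    "Pythagorean M3 (81/64)": 1200 * math.log2(81 / 64),    # ~407.8
}
for name, ideal in ideals.items():
    print(f"{name}: deviation {measured - ideal:+.1f} cents")
```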


100 Debussy's "Hommage à Haydn," Ravel's "Menuet sur le nom d'Haydn," and the Probabilistic Key-Finding Model
Andrew Aziz*
Eastman School of Music, University of Rochester; Rochester, NY, USA
* = Corresponding author, [email protected]
Temperley (2007) proposed a Probabilistic Key-Finding (PKF) Model, which, based on underlying probability distributions, can 1) compute the likelihood of a certain tonal event and 2) place such an event in a tonal context by categorizing it in a particular key. While the program was conceived to analyze common-practice tonal music, much of the algorithm's power lies in its flexibility; by merely changing parameters, the program may be transformed from one which analyzes the output of a tonal work to that of a "hybrid" composition. The current study considers the applications of the PKF Model to works by Debussy and Ravel. This type of "computational analysis," in fact, confirms that the works of these composers do not conform to traditional tonal models; this is based, in part, on the fact that the underlying pitch-class distributions vary considerably. Results of the original PKF Model simulation reveal that the key schemas for Debussy and Ravel are highly ambiguous; namely, there are few instances when the traditional major/minor dichotomy reveals a confident key "decision." In the case of Debussy, however, the analysis reveals that "minor" keys are considerably harder to categorize, and thus the paper considers an adaptation of the model. Due to the prevalence of the tritone in post-tonal contexts, the paper prescribes the acoustic scale (e.g., C-D-E-F#-G-A-Bb) as a substitution for the common-practice "minor" distribution; in particular, there is an "axis" tritone which proves to be the most significant in such acoustic contexts. The probability distributions are altered in a very specific way, and the regions revealed by the PKF Model are considerably more confident (lower in ambiguity) than in the traditional setting. This shows that probabilistic methods can provide a useful, objective way of analyzing pitch organization in twentieth-century music.

101 Pitch-continuity based Music Segmentation
Yingjia Liu*, Sisi Sun, Parag Chordia
Center for Music Technology, Georgia Institute of Technology, Atlanta, GA, USA
* = Corresponding author: yliu385@gatech.edu
We present a computational model of music segmentation in symbolic polyphonic music, based on pitch continuity. This approach is related to rule-based and probabilistic methods that look for discontinuities, typically in pitch and rhythm. A human segmentation experiment was conducted to evaluate the algorithm. We find that the current algorithm is more strongly correlated with user responses when compared with several well-known algorithms. In our experiment, 15 subjects were asked to indicate phrase boundaries in 5 pieces, which were synthesized from a symbolic score. The raw segmentation time data was converted into a density function using non-parametric (kernel density) estimation techniques, essentially indicating the segmentation likelihood at each time step. To model human segmentation, we first extract the melody using a simple "skyline" approach, taking the highest pitches. For each note, we calculate the relation with its left neighbors (LR) and right neighbors (RR) using a 1st-order pitch transition matrix computed on the song. The intuition is to find notes that are highly related to previous notes, but not to subsequent notes. Specifically, LR is the sum of transition probabilities from each note in the time window to the current note; the same method is applied for RR. Here we use a time window eight times the average note duration for the song. The value H = LR / (RR * (LR + RR)) gives an estimate of the likelihood that a note is a phrase boundary. To evaluate the algorithm, the correlation (Pearson) between H and the KDE is computed. Using this measure, the proposed algorithm outperforms local-boundary, gestalt-based, and probabilistic models, as implemented in the MIDI MATLAB toolbox. For the five pieces in this test, the average correlation was 0.31 for the proposed algorithm, 0.20 for Gestalt, 0.13 for probability-based, and 0.12 for local boundary decision. Although there is substantial room for improvement, this work is consistent with the idea that phrase boundaries are perceived when local entropy increases.
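A minimal sketch of the boundary statistic described in abstract 101, assuming a monophonic pitch/onset/duration list already produced by the skyline step; where the abstract leaves details open (window handling, the direction of the RR sums), the choices below are guesses:

```python
# Boundary likelihood H = LR / (RR * (LR + RR)) from a first-order pitch
# transition matrix, following the description in the abstract.
import numpy as np

def boundary_strength(pitches, onsets, durations):
    pitches = np.asarray(pitches)                      # MIDI pitch numbers
    trans = np.zeros((128, 128))
    for a, b in zip(pitches, pitches[1:]):
        trans[a, b] += 1
    trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1)  # row-normalize

    window = 8 * np.mean(durations)    # eight times the average note duration
    H = np.zeros(len(pitches))
    for i, (p, t) in enumerate(zip(pitches, onsets)):
        left = [j for j in range(i) if t - onsets[j] <= window]
        right = [j for j in range(i + 1, len(pitches)) if onsets[j] - t <= window]
        LR = sum(trans[pitches[j], p] for j in left)   # prior notes -> current
        RR = sum(trans[p, pitches[j]] for j in right)  # current -> later notes
        if RR > 0:
            H[i] = LR / (RR * (LR + RR))
    return H                                           # peaks suggest boundaries
```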


102 The Dynamic Implications of Simple and Recursive Patterns in Music
Andie Sigler (1,2)*, Eliot Handelman (1)
(1) Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada, (2) School of Computer Science, McGill University, Montreal, Canada
* = Corresponding author, andrea.sigler@Mail.mcgill.ca
A current research trend places listener expectations at the center of music perception, cognition, and emotion. Hypotheses as to the provenance of these expectations include statistical learning of note-to-note or interval-to-interval frequencies, and the learning of schemata at the local and phrasal levels. We propose a third origin for listener expectations: the dynamic implications of simple and recursive patterns inherent in music. We offer a mathematically elegant, stylistically neutral approach to the analysis of pattern (configurations affording rule inference) in melodic shape. Our theory is based on the perceptual salience of simple structures and their natural recursive combination into more complex structures that remain uniform and predictable. The notion of expectancy violation is realized when the patterns that have been set in motion in a particular melody are broken. We propose that the effect of the pattern-breaking 'surprise' in music can be related to what might have happened had the pattern continued. Our approach generates a set of testable hypotheses about listener expectations generated by musical pattern and the effects of their realization or violation. For example, we hypothesize that longer and simpler patterns, as well as those coordinating several musical parameters, produce stronger expectations, and therefore their pattern breaks are more salient.

103 Some new data, suggestions, and implications regarding key finding as a cognitive task
Art Samplaski*
Independent scholar, Ithaca, NY, USA
* = Corresponding author, [email protected]
Duration-weighted distributions of scale degrees (SDs) and counts of SD pairs from a large sample of diatonic fugue subjects and melody incipits from Scarlatti, Bach, and Mozart keyboard movements (N = 487) were examined to investigate possibilities for reconciling Krumhansl's (1990) tonal hierarchy (TH) and Brown & Butler's (1989) intervallic rivalry (IR), competing models for tonic inference. Multiple compositional strategies apparently were employed, whose SD distributions variously accorded with TH, IR, or seemingly neither; when aggregated by mode, distributions accorded with TH. Semitone frequencies were consistent with IR; tritones almost never occurred. While both inflections of 6 and 7 in minor were considered diatonic, La and Te almost never occurred, i.e., these melodies use "harmonic" minor. Distributions in initial measures overweighted Do + Sol, and underweighted Re-Fa. These suggest the TH/IR "disconnect" is a mirage caused by: 1) examining compositions differently (broadly, melody vs. melody + accompaniment), and 2) inadvertently conflating multiple populations. Several tactics appear to identify tonic immediately, e.g., Scarlatti often begins compositions with a solo tonic followed by another some octaves higher. Whether these reflect low-level Gestalt processes or higher cultural-specific knowledge is unclear. Potential problems for music cognition research due to possible conflation of several levels of auditory cognition are discussed briefly.
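For reference, the tonal-hierarchy side of abstract 103's comparison is usually operationalized as profile correlation: correlate a duration-weighted scale-degree distribution with a key profile at all twelve rotations. A minimal sketch using Krumhansl & Kessler's (1982) major profile (the input histogram is made up):

```python
# Krumhansl-Schmuckler style key estimation: correlate a duration-weighted
# pitch-class distribution with the major-key profile at all 12 rotations.
import numpy as np

MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def best_major_key(pc_weights):
    pc_weights = np.asarray(pc_weights, dtype=float)
    scores = [np.corrcoef(pc_weights, np.roll(MAJOR_PROFILE, k))[0, 1]
              for k in range(12)]
    return int(np.argmax(scores)), max(scores)   # (tonic pitch class, r)

# Hypothetical duration weights hinting at C major:
hist = [5, 0, 2, 0, 3, 2, 0, 4, 0, 2, 0, 1]
print(best_major_key(hist))                      # -> (0, ...), i.e., tonic C
```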


104 Impact of musical demands on measures of effort and tension during trumpet performance
Jonathan Kruger (1,2), James McLean (2), & Mark Kruger (3)*
(1) Rochester Institute of Technology, New York, USA, (2) SUNY-Geneseo, New York, USA, (3) Gustavus Adolphus College, Saint Peter, Minnesota, USA
* = Corresponding author, mgk@gustavus.edu
What is the relationship between the difficulty of a musical passage and measures of effort or tension in the body? Does the anticipated difficulty, dynamic, or tempo of a musical passage increase generalized activation in muscles not directly involved in performance? How do professionals differ from students? Research on muscle tension in brass performance has focused primarily on the embouchure and face (White and Basmajian, 1973; Hauser and McNitt-Gray, 1991; Lapatki, Stegeman, & Jonas, 2002). Henderson (1979) demonstrated that a trumpet player's throat tension changes with pitch. Kruger, McLean, and Kruger (2009) found ability predicts less overall unnecessary muscle tension, but whether musical demands increase general effort and tension is largely unknown. Effects of extraneous body and throat tension on air support are also unknown. Nine student and three professional trumpeters played a concert Bb scale, arpeggios differing in articulation and dynamics, and excerpts from the Haydn Trumpet Concerto. Extraneous shoulder and back tension was measured electromyographically. Measures were also made of expansion and contraction of the upper chest and abdomen, air pressure inside the performer's mouth, and mouthpiece pressure on the embouchure. Tension in the throat was assessed by examining closure of the glottis with an electroglottograph. We expect our data to replicate our earlier finding that successful student performers demonstrated selective control of extraneous muscle activity and created the highest levels of air pressure at the embouchure. Selective control was most apparent in high ranges of the arpeggios and during difficult passages of the Haydn concerto. The current data set will allow us to compare students and professionals and to look at throat tension in addition to shoulder and back tension. We will also be able to look to see when tension interferes with performance. We are in the process of further analyzing this data.

105 Musical Improvisation in Indian and Western Singers
Meagan Curtis (1,3)*, Shantala Hegde (2), and Jamshed Bharucha (3)
(1) Purchase College, SUNY, Purchase, NY, USA, (2) National Institute of Mental Health and Neurosciences, Bangalore, India, (3) Tufts University, Medford, MA, USA
* = Corresponding author, meagan.curtis@purchase.edu
Musical acculturation has been observed to influence the perception of music, shifting musical expectations so as to match the tonal rules of one's culture. We extend this research by examining the extent to which acculturation influences musical improvisation. We examined the musical improvisations of 20 highly trained singers, 10 from India and 10 from the United States. We asked the singers to listen to recordings of 48 short, vocalized melodies and sing an improvised continuation of each melody. Half of the stimuli were tonally consistent with the Indian raag Bhairav and half were consistent with the Western major mode. Each stimulus was sung by a trained Indian singer and also by a trained Western singer, so as to control the cultural cues in the stimuli, such as cultural differences in stylization and vocal technique. We examined the pitch class distributions of the participants' improvised vocalizations. Many of the participants in both cultures were able to adjust their vocalizations to match the tonality of each stimulus condition, but differences also emerged in the pitch class distributions for each culture. These similarities and differences will be discussed.


106 The Emotional Piano: Exploring the Use of Pitch Height and Articulation Rate in Major and Minor Keys
Matthew Poon*, Michael Schutz
McMaster Institute for Music and the Mind, McMaster University, Hamilton, Canada
* = Corresponding author, matthew.poon@rogers.com
Music and speech are known to communicate emotions using a number of acoustic cues, specifically articulation rate (e.g., syllables-per-second in speech) and fundamental frequency. Based on the premise that happiness is generally associated with major keys and sadness with minor keys (Hevner, 1935), this project examines composers' use of articulation rate and pitch height to convey happiness and sadness in piano music, in the first book of the Well-Tempered Clavier by J. S. Bach (both the preludes and fugues) and Chopin's Preludes (Op. 28). Analyzing the first eight measures of each piece yielded 192 individual data points per corpus, for a total of 576 data points for each of the two acoustic cues of interest. Our analysis revealed two points of interest. First, the major-key pieces were, on average, a major second higher than the minor-key pieces. Second, the articulation rate (as measured in notes-per-second) of the major-key pieces was, on average, 28 percent faster than the minor-key pieces (major = 5.86, minor = 4.56). This shows that, within our corpus, music that is nominally considered to be happy is both "higher" and "faster" than music considered to be sad. This finding is consistent with the acoustic cues observed in speech, where happiness is conveyed through the use of higher pitch height and a faster rate of articulation.

107 Singing with yourself: Assessing the influence of self-similarity and prototypicality in vocal imitation
Peter Q. Pfordresher (1)*, Alex Tilton (1), James T. Mantell (1), & Steven Brown (2)
(1) University at Buffalo, State University of New York, USA, (2) McMaster University, Hamilton, Ontario, Canada
* = Corresponding author, pqp@buffalo.edu
Replicating a melody by singing it is a formidable imitation task in that one must reproduce laryngeal gestures that are usually unobservable. A minority of individuals exhibit poor-pitch singing, a general tendency to mistune while imitating a melody, and previous evidence suggests that these deficits may reflect the difficulty of planning actions based on a target outcome (a version of the inverse modeling problem). We report three experiments that explored two factors that may influence imitation accuracy: self-similarity and prototypicality. Participants heard and then vocally imitated 4-note melodies based on imitative target recordings that were either recordings of themselves (from earlier in the session) or of another singer. Recordings of other-singer targets were selected from a database and represented a wide range of accuracies, including accurate (prototypical with respect to musical structure) and poor-pitch (non-prototypical) singers. Results suggested an overall advantage for self-targets as opposed to others (an effect of self-similarity). The self-advantage remained in an experiment that reduced self-recognition by presenting only pitch-time information from the recording (i.e., timbre cues were eliminated). Among imitations of other targets, there was an advantage for imitating more prototypical (accurate) rather than less prototypical (inaccurate) targets. These effects were found for the imitations of participants who were good and poor-pitch singers, though the self-advantage was magnified for singers who were less accurate overall. These results argue for a self-similarity advantage, possibly due to enhanced perception-action associations for one's own vocal gestures, and a secondary advantage for prototypicality in vocal imitation. Furthermore, these results support the view that poor-pitch singing is based on weak associations specific to translating perceptual contents to actions.


Titles and authors for poster session 1 (Friday)
Arranged alphabetically by last name of first author

A.1 The Effect of Musical Expectation in Background Music on Short-Term Phonological Memory
Elizabeth Aguila, Richard Randall
A.2 Tempo-based music treatment for inducing rhythmic entrainment, systemic pacing, and redirection of repetitive behaviors for children of the autism spectrum
Dorita S. Berger, PhD, Matthew Goodwin, Daniella Aube, Eliza Lane, Brittany L. Croley
A.3 Effects of musical context on the perception of pitch differences between tones of same and different timbre
Elizabeth M. O. Borchert, Andrew J. Oxenham
A.4 The Effect of Voice Numerosity on Perceived Musical Loneliness
Yuri Broze, Kate Guarna, Brandon Paul
A.5 The Right Place at the Right Time: An Analysis of Spectral and Temporal Pitch Mechanisms Using Event-Related Potentials and Source Localization
Blake Butler, Laurel Trainor
A.6 Eyeblinks as an Index of Attentional Chunking in Music Listening
Matthew Campbell, Niall Klyn
A.7 The neurochemical basis for musical regulation of emotion, cognition, and health: A review
Mona Lisa Chanda, Daniel J. Levitin
A.8 Semantic Priming Effects in the Cross-communication of Emotional Concepts by Music and Language
Jacob Morgan, Tsee Leng Choy, & John Connolly
A.9 Explicit and Implicit Knowledge of Rock Harmony in Nonmusicians
A.10 Applying Principles of Music Scene Analysis to Simultaneous Auditory Warning Signals
Matthew Davis
A.11 A new model of perceived information content in melodic and non-melodic lines
Ben Duane
A.12 Neural processing of dissonance distance in melodies as revealed by magnetoencephalography (MEG)
Roger Dumas, Arthur Leuthold, & Apostolos Georgopoulos
A.13 Does Music Induce or Just Represent Emotions? The Role of Episodic Memories in Emotional Responses to Music
Laura Edelman, Patricia Helm, Alan Bass, Laura Brehm, Melissa Katz, & Melissa Wolpow
A.14 Musical Fans' Stereotypes Activation and Mental Performance
Marek Franek, Roman Mlejnek, Jan Petruzalek
A.15 When Push Comes to Shove: Rhythmic Behavior in Improvised Walking Bass
Ofer Gazit, Eric Battenberg, David Wessel
A.16 Directional Asymmetry in Tonal Space and the Dramatic Use of bII: Theoretical Underpinnings, Empirical Evidence, and Musical Applications
Bruno Gingras
A.17 The Effect of Orchestration Changes on Continuous Responses of Emotional Intensity
Meghan Goodchild, Jonathan Crellin, Jonathan Wild, Stephen McAdams
A.18 What musical distinctiveness says about the organization of compositional memory
Eliot Handelman
A.19 An Acoustic Model for Chord Voicings in Post-Tonal Music
Robert Hasegawa
A.20 Brain correlates of happy and sad musical emotions using ragas of Hindustani classical music: An electrophysiological study
Shantala Hegde, Shobini L. Rao
A.21 Influence of practiced instruments on the automatic encoding of polyphonic melodies
C. Marie, L. Herrington, & L. J. Trainor
A.22 The Influence of Absolute Pitch on Three-Dimension Mental Rotation and Related Processing Characteristics
Jian-cheng Hou, Qi Dong, Qing-hua He, Chun-hui Chen, He Li, Chuan-sheng Chen, Gui Xue
A.23 The Influence of Absolute Pitch on Tone Language Working Memory and Related Processing Characteristics
Jian-cheng Hou, Qi Dong, Qing-hua He, Chun-hui Chen, He Li, Chuan-sheng Chen, Gui Xue
A.24 Vocal Range Normalization and its Role in the Perception of Emotion in Different Voice Types
Randolph Johnson, Elizabeth Lagerstrom
A.25 Investigating listeners' preference and brain responses of multichannel-reproduced piano music
Sungyoung Kim, Tomasz M. Rutkowski
A.26 The Perception of Non-chord Tones vs. Unexpected Chord Tones in Tonal Melodies: Influence of Melodic Context on Implied Harmony Perception
Jung Nyo Kim
A.27 A Critical Examination of the Theory of Tonal Hierarchy and Arguments for a New Theoretical Framework for Explaining Tonality Perception
Ji Chul Kim
A.28 Pitch and Eyebrow Height: a Transcultural Phenomenon?
Niall Klyn, Matthew Campbell
A.29 Functional neuroimaging of musical emotions: a review and meta-analysis
Thomas Kraynak
A.30 Effects of Contour Change on Memory Encoding for Minuets: An ERP Study
Shannon L. Layman, Ramiro R. Lopez, W. Jay Dowling
A.31 Relationship between basic auditory abilities and performance on the MBEA
J. Devin McAuley, Elizabeth Wieland
A.32 Neural mimicry during perception of emotional song
Lucy McGarry, Lisa Chan, and Frank Russo
A.33 Investigating the Role of Musical Expertise in Phonetic Analogies of Guitar Timbre
Audrey Morin, Nathalie Gosselin, Caroline Traube
A.34 Harmonic Function from Voice-Leading: A Corpus Study
Ian Quinn, Panayotis Mavromatis
A.35 Composing by Selection: Can Nonmusicians Create Emotional Music?
Lena Quinto, William Forde Thompson, Alex Chilvers
A.36 Deconstructing Evolution's Sexual Selection Shows Music Could Arise Without Becoming Sex Dimorphic: Music is Not a Fitness Indicator
Mark S. Riggle
A.37 Musical Training, Working Memory, and Foreign Language Learning
Matthew Schulkind and Laura Hyman
A.38 Surveying the Temporal Structure of Sounds Used in Music Perception Research
Michael Schutz, Jonathan Vaisberg
A.39 Context Dependent Pitch Perception in Consonant and Dissonant Harmonic Intervals
George Seror III, Jeremy Gold, W. Trammell Neill
A.40 Primitive Hierarchical Processes and the Structure of a Single Note
David Smey
A.41 The Ineffability of Modern Art Music
Cecilia Taher
A.42 Music and the Phonological Loop
Lindsey Thompson, Margie Yankeelov
A.43 The Effect of Training on Melody Recognition
Naresh N. Vempala, Frank A. Russo, Lucy McGarry


Titles and authors for poster session 2 (Saturday)
Arranged alphabetically by last name of first author

B.1 Does the change of a melody's meter affect tonal pattern perception?
Stefanie Acevedo, David Temperley, & Peter Q. Pfordresher
B.2 The Melody of Emotions
Michel Belyk, Steven Brown
B.3 Expression in romantic piano music: Criteria for choice of score events for emphasis
Erica Bisesi, Richard Parncutt
B.4 Melodies and Lyrics: Interference Due to Automatic Activation
Jack Birchfield
B.5 Musical Expertise and the Planning of Expression During Performance
Laura Bishop, Freya Bailes, Roger T. Dean
B.6 Perceptual grouping: The influence of auditory experience
Keturah Bixby, Joyce McDonough, Betsy Marvin
B.7 Song Style and the Acoustic Vowel Space of Singing
Evan D. Bradley
B.8 Orff-Schulwerk approach and flow indicators in Music Education context: A preliminary study in Portugal
João Cristiano & R. Cunha
B.9 Movement during Performance: A Hunt for Musical Structure in Postural Sway
Alexander P. Demos, Till Frank, Topher Logan
B.10 Developing a Test of Young Children's Rhythm and Metre Processing Skills
Kathleen M. Einarson, Laurel J. Trainor
B.11 Effects of musical training on speech understanding in noise
Jeremy Federman, Todd Ricketts
B.12 Differentiating people by their voices: Infants' perception of voices from their own culture and a foreign species
Rayna H. Friendly, Drew Rendall, Laurel J. Trainor
B.13 Signs of infants' participatory and musical behavior during infant-parent music classes
Helga Rut Gudmundsdottir
B.14 The Effect of Amplitude Envelope on an Audio-Visual Temporal Order Judgment Task
Janet Kim, Michael Schutz
B.15 Motion Capture Study of Gestural-Sonic Objects
Mariusz Kozak, Kristian Nymoen, Rolf Inge Godøy
B.16 Interactive Computer Simulation and Perceptual Training for Unconventional Emergent Form-bearing Qualities in Music by Ligeti, Carter, and Others
Joshua B. Mailman
B.17 Automatic Imitation of Pitch in Speech but not Song
James Mantell, Peter Pfordresher, Brian Schafheimer
B.18 Sequence Context Affects Memory Retrieval in Music Performance
Brian Mathias, Maxwell F. Anderson, Caroline Palmer, Peter Q. Pfordresher
B.19 Developing a window on infants' structure extraction
Jennifer K. Mendoza, LouAnn Gerken, Dare Baldwin
B.20 The Effect of Visual Stimuli on Music Perception
Jordan Moore, Christopher Bartlette
B.21 An Experiment on Music Tempo Change in Duple and Triple Meter
Yue Ouyang
B.22 Listener-defined Rhythmic Timing Deviations in Drum Set Patterns
Brandon Paul, Yuri Broze, Joe Plazak
B.23 The Effects of Altered Auditory Feedback on Speech and Music Production
Tim A. Pruitt, Peter Pfordresher
B.24 Does Note Spacing Play Any Role in Music Reading?
Bruno H. Repp, Keturah Bixby, Evan Zhao
B.25 Bayesian modelling of time interval perception
Ken-ichi Sawai, Yoshiyuki Sato, Kazuyuki Aihara
B.26 Linguistic Influences on Rhythmic Preference in the Music of Bartok
Andrew Snow, Heather Chan
B.27 Infants prefer singers of familiar songs
Gaye Soley and Elizabeth Spelke
B.28 Learning to sing a new song: Effects of native English or Chinese language on learning an unfamiliar tonal melody having English or Chinese lyrics
Leah C. Stevenson, Bing-Yi Pan, Jonathan Lane, & Annabel J. Cohen
B.29 Exploring Real-time Adjustments to Changes in Acoustic Conditions in Artistic Piano Performance
Victoria Tzotzkova
B.30 The role of continuous motion in audio-visual integration
Jonathan Vaisberg, Michael Schutz
B.31 The Effect of Rhythmic Distortion on Melody Recognition
David Weigl, Catherine Guastavino, Daniel J. Levitin
B.32 Perception of entrainment in apes (pan paniscus)
Philip Wingfield, Patricia Gray
B.33 Transfer Effects in the Vocal Imitation of Speech and Song
Matthew G. Wisniewski, James T. Mantell, Peter Q. Pfordresher
B.34 The single voice in the choral voice: How the singers in a choir cooperate musically
Sverker Zadig


Titles and abstracts for poster session 1 (Friday)
Arranged alphabetically by last name of first author

A.1 The Effect of Musical Expectation in Background Music on Short-Term Phonological Memory
Elizabeth Aguila (1)*, Richard Randall (1)
(1) Carnegie Mellon University, Pittsburgh, USA
* = Corresponding author, elizabeth.aguila5@gmail.com
This study examines the effect of musical expectation on short-term phonological memory. Working from the premise that the processing of diatonic melodies with expectation violations requires greater cognitive resources than the processing of melodies with no violations, this experiment asked subjects to perform a simple memory task while listening to a melody in the background. If a melody violates an expectation, we hypothesized that the increased cognitive demand would draw resources away from the foreground memory task and negatively affect short-term phonological memory. Specifically, we isolate the melodic expectations of pitch proximity and post-skip reversal (Huron, 2006). Pitch proximity is when listeners expect a following pitch to be near a current pitch. Post-skip reversal is when listeners expect a large interval to be followed by a change in direction. For each trial, subjects were shown a series of seven random numbers ranging from 1 to 7, presented for one second per number. The numbers were presented with a background diatonic melody of 16 notes (major mode and beginning and ending on scale-degree 1). The melodies used included a control melody, three types of broken pitch-proximity melodies, two types of post-skip reversal violation melodies, and two post-skip reversal lure melodies. The background melodies played were randomized within the entire experiment. Following the presented number series, the subjects were asked to remember the numbers in the exact order they were presented. After all the tasks were completed, subjects were asked to complete a survey. There were a total of 28 subjects, including 13 males and 15 females. Although there was no significant effect of music condition on percent error of the memory task, there was a significant effect of serial position. Also, there were significant interactions between one melody with a broken pitch proximity expectation and the two post-skip reversal lure melodies.

A.2 Tempo-based music treatment for inducing rhythmic entrainment, systemic pacing, and redirection of repetitive behaviors for children of the autism spectrum
Dorita S. Berger, PhD (1)*, Matthew Goodwin (2), Daniella Aube (3), Eliza Lane (3), Brittany L. Croley (4)
(1) Music Therapy Clinic, Norwalk, CT; (2) Consultant to Project, Groden Center, Providence, RI; (3) Research Assistants, Groden Center, Providence, RI; (4) NYU Music Therapy, New York, NY
* = Corresponding author, dsberger@mags.net
Many behaviors in children on the autism spectrum resemble fight-or-flight avoidance responses as a result of habitual states of fear, possibly induced by sensory integration issues causing on-going stress and deregulation of systemic pacing. Structured tempo-based rhythm interventions at 60 beats per minute, designed for entraining systemic regulation in autism, can serve to induce systemic pacing and reduction or redirection of repetitive behaviors, yielding focus, calm, attention, and learning in persons on the autism spectrum. An eight-week pilot study investigated whether (and how) the role of tempo in discrete activity-based music therapy treatment could influence habituation (entrainment) to regulated systemic inner rhythms, coordinating pacing, reducing stress, anxiety, and repetitive behaviors, and yielding eye-contact, attention, motor-planning, and memory. Six subjects on the spectrum, ages 8-12, with minimal expressive language, received eight 45-minute individual therapy sessions treated with four different rhythm interventions addressing breath control, regulation of arm movements, upper-lower body coordination, and drumming. Each event was repeated four times within the sessions, to a rhythmic tempo pattern at 60 beats per minute. A Lifeshirt heart monitor vest with embedded wireless sensors was worn by each subject in the first, fifth and eighth session, to monitor and provide visible accounting of heart-rate activity during those three sessions. Results appear to indicate various levels of pulse entrainment, and excellent progress and regulation in task undertaking and sequence retention by each of the six subjects; increases in motor planning abilities, visual contact, attention, and reduction of repetitive behaviors were also indicated. Heart rate data over the three sessions in which the vest was worn display that a level of entrainment and regulation was taking place. Results tend to support the hypothesis that highly structured, tempo-specific rhythmic tasks at a slow tempo can bring about systemic pacing to redirect or reduce anxiety behaviors and yield functional adaptation.


A.3 Effects of musical context on the perception of pitch differences between tones of same and different timbre
Elizabeth M. O. Borchert*, Andrew J. Oxenham
University of Minnesota, Minneapolis, USA
* = Corresponding author, olsen064@umn.edu
Pitch plays an important role in complex auditory tasks such as listening to music and understanding speech, yet listeners' ability to compare the pitch of two tones can be quite poor when the tones also differ in timbre. This study investigated the extent to which the difficulties encountered when judging isolated tones are mitigated in the presence of a musical context. We used measures of sensitivity (d') and response times to determine the effects of various tone contexts on listeners' ability to detect small changes in pitch between two tones that had either the same or different timbre (based on spectral shape). A series of three experiments were conducted, each of which included at least 18 subjects (musical training from 0 to 15 years). We found that a descending diatonic scale generally produced a small improvement in both sensitivity and response time when listeners compared tones of different timbre, but not tones of the same timbre. The improvement appeared to depend on the familiar tonality of the context, as a descending whole-tone scale failed to yield similar improvements. Although the effect of a tonal context was significant in the first two experiments, it failed to reach significance in the third experiment. As the third experiment was also the longest, it may be that thresholds in the no-context condition improve with sufficient training to the point that a tonal context no longer provides any additional benefit. Finally, we found that performance was not affected by whether the diatonic or whole-tone context notes were always presented in the same (descending) order or whether they were presented in a variable and random order, suggesting that overlearned tonal hierarchies may play a more important role than the short-term predictability of the context sequence. [Supported by NIH grant R01 DC05216.]

A.4 The Effect of Voice Numerosity on Perceived Musical Loneliness
Yuri Broze (1)*, Kate Guarna (1), Brandon Paul (2)
(1) Ohio State University School of Music, Columbus, OH, USA, (2) Ohio State University Department of Speech and Hearing Science, Columbus, OH, USA
* = Corresponding author, [email protected]
Musical characteristics such as mode, tempo, and dynamic level have been reliably linked to subjective reports of different perceived musical emotions. In the present study, we investigate possible emotional effects of voice numerosity: the instantaneous number of concurrently sounding musical voices in polyphonic music. It is plausible that voice numerosity could be an important factor in the perception of those musical emotions which imply social contexts. In particular, one might expect the perception of musical loneliness to be strongest for musical excerpts with fewer concurrent musical voices. To test this hypothesis, we asked listeners to rate brief (~5 s) musical excerpts for perceived happiness, sadness, pride, and loneliness using a slider interface. Stimuli representing conditions of one, two, three, or four concurrent voices were drawn from J. S. Bach's The Well-Tempered Clavier, performed by a professional harpsichordist. Since fugal exposition is characterized by the stepwise accumulation of musical voices, it was possible to obtain sets of stimuli matched for mode, tempo, and motivic content. Limitations of the harpsichord's plucking mechanism guaranteed consistent dynamic level and timbre. While full results were unavailable at the time of submission, preliminary data suggest that voice numerosity can indeed impact the perception of certain musical emotions.
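A.3 above quantifies pitch-change detection with the sensitivity index d'; as a quick reference, the computation with made-up response counts is:

```python
# Sensitivity d' for a same/different pitch judgment from hit and
# false-alarm rates. All counts are made up for illustration.
from scipy.stats import norm

hits, misses = 42, 8       # "different" trials answered "different" / "same"
fas, crs = 12, 38          # "same" trials answered "different" / "same"

# Log-linear correction keeps rates away from 0 and 1.
hit_rate = (hits + 0.5) / (hits + misses + 1)
fa_rate = (fas + 0.5) / (fas + crs + 1)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"d' = {d_prime:.2f}")   # ~1.66 for these counts
```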


A.5 The Right Place at the Right Time: An Analysis of Spectral and Temporal Pitch Mechanisms Using Event-Related Potentials and Source Localization
Blake Butler (1)*, Laurel Trainor (1)
(1) McMaster University, Hamilton, Canada
* = Corresponding author, BUTLERBE@MCMASTER.CA
Sounds with pitch typically have energy at a fundamental frequency and integer multiples of that frequency. Pitch is derived by the auditory system through complex spectrotemporal processing in which harmonics are combined into a single pitch percept. Recent studies suggest pitch is represented beyond primary auditory cortex, in a region near its anterolateral edge. Although pitch research has focused on the spatial code originating in the cochlea, pitch information is also represented in a temporal rate code. Here we compare event-related potentials (ERPs) derived from EEG in response to a pitch change, using stimuli that target one or both of these mechanisms. Each of twelve subjects participated in three listening conditions with pure tones, complex tones, or iterated rippled noise (IRN contains temporal pitch cues in the absence of spectral cues; Yost, 1996). In each condition, a standard stimulus (perceived pitch of 167 Hz) was presented on 85% of trials, and an oddball stimulus (200 Hz) was presented on the remaining trials. Responses were averaged for each participant, and difference waves were created by subtracting the response to the control stimulus from the response to the oddball. Repeated measures ANOVAs revealed a significant difference in MMN peak amplitude (p < 0.001). T-tests revealed that peak amplitude was significantly greater in the IRN condition than the complex condition (p = 0.007), which was in turn significantly greater than the pure tone condition (p = 0.002). Source localization using BESA is ongoing; however, preliminary results show small but significant differences in the source location of the MMN response to changes in the different conditions. These results suggest that MMN responses elicited by pitch changes are highly dependent on the type of cues present. Complex tones, containing both spectral and temporal cues, appear to be analyzed in somewhat different cortical regions than IRN stimuli containing primarily temporal cues.

A.6 Eyeblinks as an Index of Attentional Chunking in Music Listening
Matthew Campbell*, Niall Klyn
The Ohio State University, Columbus, Ohio, USA
* = Corresponding author, [email protected]
Physiological research measuring gross autonomic responses to music listening has focused almost exclusively on induced emotional states, as opposed to cognitive processing, as indicated by variations in heart rate, GSR, respiration, and GSM. Following the isolation of the endogenous (as opposed to reflexive or voluntary) eyeblink, blinking rate has been shown to index a variety of internal states and processes including arousal, emotion, cognitive load, and deception. In addition to visual tasks, purely auditory conditions have also been shown to produce predictable variations in blink rate and fixation duration. Most recently, in Nakano et al.'s 2009 "Synchronization of spontaneous eyeblinks while viewing video stories," within- and between-subject blinking synchrony was found to strongly correlate with "low information" points when viewing visual narratives, indicating a shared mechanism for the implicit detection of narrative "lulls" and/or collective chunking to manage cognitive load. The present study explores the possibility of endogenous blinking as a cross-modal indication of attentional chunking in music listening, using Nakano et al.'s 2010 study "Eyeblink entrainment at breakpoints of speech" as a model. In it, researchers presented each subject with video clips containing a single speaker delivering a monologue in three conditions: audio + video, video only, and audio only, finding significant subject/speaker entrainment and between-subject synchrony in the AV condition but little entrainment (and no mention of synchrony) in the audio-only condition. This study prepares three similar clips depicting a clearly visible and audible singer performing unaccompanied and uninterrupted. Due to music's more inherently intermodal character and differing load requirements, we hypothesize strong between-subject blink synchrony at similar time and prosodic intervals in both the AV and AO conditions.


A.7 The neurochemical basis for musical regulation of emotion, cognition, and health: A review
Mona Lisa Chanda (1)*, Daniel J. Levitin (1,2)
(1) Laboratory for Music Perception, Cognition and Expertise, Department of Psychology, McGill University, (2) Center of Interdisciplinary Research in Music Media and Technology (CIRMMT)
* = Corresponding author, mona.chanda@mail.mcgill.ca
Music has strong emotional and social significance for many people; however, the physiological basis for its powerful impact is not well understood. Many people use music as a means of regulating mood and arousal, much as they use caffeine or alcohol. Music is used to evoke a wide range of emotions (i.e., joy, excitement, relaxation, etc.), to enhance concentration and cognition, to improve attention and vigilance, and to motivate. There is also an emerging body of evidence that musical intervention in clinical settings has beneficial effects on psychological and physical health and well-being, lending credence to the adage that "music is medicine". These diverse functions of music are generally believed to reflect its widespread effects on neurotransmitter systems regulating motivation, emotion, social affiliation, attention and cognition. However, direct scientific inquiry into the neurochemical processes underlying music perception warrants further investigation. The present work examines the current state of knowledge regarding the neurochemical effects of music in the hope of stimulating further interest in this promising field. We review studies investigating the contribution of different neurotransmitter systems (e.g., dopamine, adrenaline, norepinephrine, serotonin, oxytocin) to musical perception, and the neurochemical basis for the beneficial effects of music on health-related outcomes, including endocrine markers of stress and immune system functioning. We develop a conceptual framework and make recommendations for future scientific inquiry. We hope to: (a) advance our understanding of the physiological mechanisms underlying the strong emotional and social importance of music for healthy individuals, (b) provide an empirical basis for the beneficial effects of music therapy, and (c) make informed recommendations for clinical practice using music for a range of therapeutic outcomes, both physical and psychological.

A.8 Semantic Priming Effects in the Cross-communication of Emotional Concepts by Music and Language
Jacob Morgan*, Tsee Leng Choy, & John Connolly
McMaster University, Hamilton, Ontario, Canada
* = Corresponding author, morgaj5@mcmaster.ca
This study investigated the comprehension of emotional concepts as conveyed by both music and language in tandem. Previous work has demonstrated that music and visual language can cross-communicate semantic meaning, such that musical excerpts determined to convey a given concept can prime a word which denotes that concept, and vice versa. There is a strong indication, without a complete understanding as to why or how, that music enjoys a privileged access to human emotion. The current study has found, for the first time, that excerpts of music influence how the meaning of subsequent auditory words denoting emotional concepts is processed. A brief musical excerpt was followed in quick succession by a spoken word, either "happy" or "sad". Participants were asked to indicate with a button press whether the emotion conveyed by the music and the emotion described by the word were a match or a mismatch. Faster response times were observed in the match condition as compared to the mismatch condition, which indicates a strong priming effect, meaning that the emotion conveyed by the music facilitated the processing of the word. Event-related brain potential (ERP) indices are especially well-suited to measure semantic priming effects because they reveal neurocognitive processes online with exquisite temporal resolution. It is therefore possible to track aspects of cognition, such as comprehension, as they unfold in time. The ERP profile corroborated the behavioural results, showing disruptions of recognition and comprehension in the mismatch condition that were not exhibited in the match condition. These results show that the cognition of emotional concepts can be similarly communicated by both music and speech. Both music and language are parallel, if not overlapping, conduits for the cognition and meta-cognition of emotional concepts. These findings may be extrapolated to clinical applications, particularly in appraising comprehension and awareness in non-communicative patients.


A.9 Explicit and Implicit Knowledge of Rock Harmony in Nonmusicians
Lincoln G. Craton*, Wyatt Donnelly-Landolt, Laura Domanico, Erik Muhlenhaupt, and Christopher R. Poirier
Stonehill College, Easton, MA, USA
*= Corresponding author, lcraton@stonehill.edu

The way a piece of music plays with our expectations is at the heart of our aesthetic response to it (Huron, 2006; Meyer, 1956). To our knowledge, the present study is the first to explore listeners' explicit (Experiments 1 and 2) and implicit (Experiments 3 and 4) expectancies for Rock harmony. In Experiment 1, 20 untrained listeners provided liking and surprise ratings for 31 (major, minor, or dominant 7th) target chords played after a brief musical context (major scale or major pentatonic scale + tonic major triad) or a non-musical context (white noise + silence). Recent music-theoretical accounts (e.g., Stephenson, 2002) and a statistical corpus analysis (Temperley & de Clercq, 2010) indicate that Rock music (construed broadly to mean most recent popular music) utilizes all the chords of Common Practice, plus many that violate Common Practice. Thus, the manipulation of greatest interest was harmonic system: (1) traditional diatonic chords, expected in both Common-Practice and Rock music; (2) rock-only diatonic chords, unexpected in Common-Practice music but expected in Rock; and (3) nondiatonic chords lying outside either harmonic system. For major chords only, liking ratings in both the rock-only diatonic condition (M = 6.20, SD = 1.45) and the traditional diatonic condition (M = 6.18, SD = 1.66) were higher than those in the nondiatonic condition (M = 4.91, SD = 1.08), t(19) = 5.32, p < .001, and t(19) = 4.19, p = .001, respectively. Untrained listeners thus possess some explicit harmonic knowledge of both Common-Practice and Rock harmony. Experiment 2 was identical except for musical context (minor pentatonic or mixolydian scale + tonic major triad). Experiments 3 and 4 used the harmonic priming paradigm (Bigand & Poulin-Charronnat, 2006) to measure implicit harmonic knowledge.

A.10 Applying Principles of Music Scene Analysis to Simultaneous Auditory Warning Signals
Matthew Davis (1)*
(1) The Ohio State University, Columbus, USA
*= Corresponding author, davis.3131@osu.edu

In emergency situations, aircraft pilots are often presented with the difficult task of distinguishing between simultaneous auditory warning signals. An inability to effectively discriminate between synchronous warnings can lead the pilot to ignore certain signals, to misinterpret them, or to be simply unaware of their presence. The creation of signals easily distinguishable from other simultaneous signals would not only be desirable but would contribute to increased situational awareness and better judgment during emergencies. While much research has been conducted examining the appropriate properties of warning signals, it would be prudent to examine the contributions of auditory streaming research in relation to music. The ability of listeners to correctly identify synchronous melodic streams has been well documented and has been taken advantage of by composers for hundreds of years. For instance, many pieces of music contain over five melodically independent synchronous streams while remaining harmonically related and integrated into the overall character of the piece. This can be accomplished by manipulating onset synchrony, timbre, location, rhythmic identity, amplitude modulation, temporal continuity, and pitch proximity, to name a few. By applying these principles to warning signals, this study has sought to create a system of auditory warnings that contain more efficient differentiating properties while conforming to a more unified stylistic entity.
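As a concrete illustration of the streaming cues A.10 enumerates, here is a minimal Python sketch (not the study's actual stimuli; all frequencies, pulse rates, and offsets are hypothetical) that builds two concurrent warning signals separated by pitch register, pulse rate, and onset asynchrony:

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def tone(freq, dur, sr=SR):
    """Pure tone with short raised-cosine onset/offset ramps (no clicks)."""
    t = np.arange(int(dur * sr)) / sr
    y = np.sin(2 * np.pi * freq * t)
    ramp = int(0.01 * sr)  # 10 ms ramps
    env = np.ones_like(y)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return y * env

def two_warnings(onset_gap=0.15, dur=2.0, sr=SR):
    """Two concurrent warning streams made separable by register,
    pulse rate, and onset asynchrony (cf. the cues listed in A.10)."""
    mix = np.zeros(int(dur * sr))
    # Stream 1: low register, slow pulses, starting at t = 0
    for t0 in np.arange(0.0, dur - 0.2, 0.4):
        i = int(t0 * sr)
        p = tone(440.0, 0.15)
        mix[i:i + len(p)] += p
    # Stream 2: high register, fast pulses, onset-shifted by onset_gap
    for t0 in np.arange(onset_gap, dur - 0.1, 0.2):
        i = int(t0 * sr)
        p = tone(1320.0, 0.08)
        mix[i:i + len(p)] += 0.7 * p
    return mix / np.max(np.abs(mix))
```

On Bregman-style streaming principles, widening the pitch separation or increasing onset_gap should make the two warnings easier to hear as separate streams; the abstract's other cues (location, amplitude modulation, timbre) could be added the same way.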


A.11 A new model of perceived information content in melodic and non-melodic lines
Ben Duane (1)*
(1) Northwestern University, Evanston, IL, USA, ben-[email protected]
*= Corresponding author, benjaminduane2012@u.northwestern.edu

Little is known about how listeners distinguish between melodies, counter-melodies, accompaniments, and other musical lines. While a few researchers (Madsen & Widmer, 2006; Duane, 2010) have found evidence that information content is a factor (that melodies sound like melodies, for instance, because they contain relatively high information), this work is in its infancy and falls short on several counts. First, it concerns itself only with single notes (or pairs of adjacent notes), even though listeners can likely process information in a more global manner. Second, it accounts only for one's knowledge of the piece being heard, not any larger knowledge of musical style. Third, these researchers fail to advance a plausible hypothesis about how musical information is perceived, processed, and remembered. This paper attempts to remedy these shortcomings by proposing and evaluating a new computational model of musical information tracking. For each note in a piece, the model computes various types of information content, some based on the current piece, others based on a corpus designed to reflect stylistic knowledge. The former are based only on the music preceding the note under analysis, simulating the listener's evolving knowledge of the piece as it unfolds. The model can also compute information at several probabilistic orders, allowing notes to be analyzed in the context of one or more preceding notes. A running sum, weighted by an inverse exponential function, simulates the accrual and decay of information in a listener's memory. This model was tested on several string quartets, the lines of which were labeled as either principal (e.g., melodic), secondary (e.g., counter-melodic), or accompanying. A discriminant analysis demonstrated that the model's output could be used to distinguish between principal, secondary, and accompanying lines with above-chance accuracy. These results are consistent with the hypothesis that listeners base such distinctions on musical information content.

A.12 Neural processing of dissonance distance in melodies as revealed by magnetoencephalography (MEG)
Roger Dumas (1,2,3)*, Arthur Leuthold (1,2), Apostolos Georgopoulos (1,2,3)
(1) Brain Sciences Center, VAMC, Minneapolis, MN, USA (2) Neuroscience, U of Minnesota, Minneapolis, MN, USA (3) Center for Cognitive Sciences, U of Minnesota, Minneapolis, MN, USA
*= Corresponding author, [email protected]

We used magnetoencephalography (MEG) to investigate the brain mechanisms underlying the processing of dissonance in tone sequences. Ten human subjects listened to an audio file of 10 tone sequences of 60-s duration each. Each sequence comprised a random permutation of 240 pure tones (250 ms/tone) from a set of pitches in the key of C major (2-octave range: freq. 261.6 Hz-987.86 Hz). These sequences were designed to have serial correlations from 0.0 to 0.9 over 5 lags, to produce a wide variation in differences between successive tones. MEG activity was recorded using 246 sensors (axial gradiometers, Magnes 3600WH, 4-D Neuroimaging) at 1017 Hz. We evaluated the brain representation of dissonance calculated in 5 tonal spaces: (a) the absolute value of the log of the sound frequency ratio (SF), (b) Longuet-Higgins (LH) space, (c) a 4-dimensional composite space developed by Roger Dumas (P4 helix, P4H), (d) Krumhansl and Kessler's probe-tone profile (KK), and (e) Parncutt's weighted measure of chroma salience (P88). With respect to the neural data, successive absolute differences in the MEG signal (Nd) were calculated between note means. With respect to the notes, Euclidean distances (Ed) were calculated between successive notes. We wanted to assess the relations between changes in neural activity and tone changes in each one of the five tonal spaces. For that purpose, we carried out an autoregressive analysis (to account for possibly correlated errors), where the dependent variable was Nd and the independent variable was Ed, in 5 separate regressions. The results yielded five brain maps, corresponding to the independent variables. We found dense clusters of statistically significant relations for all five variables. Clusters in bilateral superior temporal gyrus were positive, whereas clusters in posterior parietal areas showed a positive/negative asymmetry.
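A minimal sketch of the kind of model A.11 describes, assuming first-order probabilities estimated from the unfolding piece and an exponentially decaying running sum; the smoothing scheme and decay rate are illustrative assumptions, not Duane's published choices:

```python
import math
from collections import defaultdict

def surprisal_stream(notes, order=1):
    """Information content (surprisal, in bits) of each note, estimated
    only from the notes heard so far in the piece (add-one smoothing).
    A corpus-based variant would seed the counts from a style corpus."""
    context_counts = defaultdict(lambda: defaultdict(int))
    out = []
    for i, note in enumerate(notes):
        ctx = tuple(notes[max(0, i - order):i])
        counts = context_counts[ctx]
        total = sum(counts.values())
        vocab = len(set(notes[:i])) + 1           # crude alphabet estimate
        p = (counts[note] + 1) / (total + vocab)  # smoothed probability
        out.append(-math.log2(p))
        counts[note] += 1
    return out

def leaky_sum(info, decay=0.8):
    """Running sum weighted by an inverse exponential, simulating the
    accrual and decay of information in memory (decay rate assumed)."""
    acc, out = 0.0, []
    for x in info:
        acc = decay * acc + x
        out.append(acc)
    return out

# Example: information profile of a toy line (MIDI pitches)
line = [60, 62, 64, 60, 67, 65, 64, 62, 60]
print(leaky_sum(surprisal_stream(line)))
```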


A.13 Does Music Induce or Just Represent Emotions? The Role of Episodic Memories in Emotional Responses to Music
Laura Edelman*, Patricia Helm, Alan Bass, Laura Brehm, Melissa Katz, & Melissa Wolpow
Muhlenberg College, Allentown, PA, USA
*= Corresponding author, [email protected]

Prior work (2009) distinguishes between music's representation and induction of emotions. Hunter, Schellenberg, and Schimmack (2010) found that representation ratings were higher than induction ratings. Our four studies show that having a memory associated with a song increased both representation and induction of emotion in music. The first two studies used popular music that varied in emotional tone and familiarity. Participants rated the music on several scales, including the degree to which it represented the emotion and the degree to which the participant actually felt the emotion (on a seven-point scale, with higher numbers meaning more representation or more induction). Participants also indicated whether they had a memory associated with the piece and whether that memory was positive, negative, or neutral. Associated memories increased both the representation and induction ratings. For example, a negative familiar song was rated 4.5 for representation and 4.4 for induction when no memory was associated with it; having a positive or negative memory associated with the song raised the ratings to 6.1 and 6.3, or 6.3 and 5.7, for representation and induction respectively, F(18, 147) = 1.76, p < .01. In the third and fourth studies the researchers used classical pieces instead of popular music, to eliminate the lyrics. The results were very similar, and there was also generally a decrease in the difference between the two factors when a memory was present. For example, "Funeral March" by Chopin had a 0.7 difference between representation and induction ratings when there was no memory association with the piece (M = 5.5, M = 4.8). When there was a positive memory (M = 7.0, M = 7.0) or negative memory association (M = 6.4, M = 6.3), the range between the two scores decreased. The current studies demonstrate a complex relationship between representation and induction of emotion in music.

A.14 Musical Fans' Stereotype Activation and Mental Performance
Marek Franek (1)*, Roman Mlejnek (2), Jan Petruzalek (1)
(1) Faculty of Informatics and Management, University of Hradec Králové, Czech Republic, (2) Institute of Musicology, Faculty of Arts, Charles University, Prague, Czech Republic
*= Corresponding author, marek.franek@uhk.cz

A large body of studies has revealed that activation of a certain social stereotype may influence subsequent behavior or performance. The present study investigated the effect of stereotypes of musical fans. There is evidence that the social stereotype of the classical music listener is associated with a higher intelligence level compared to stereotypes of fans of certain genres of popular music. The goal of our study was to investigate the effect of activating musical-fan stereotypes on mental performance, specifically learning words in a foreign language. A preliminary investigation showed that the highest scores of perceived intelligence were associated with listeners of classical music and jazz, and the lowest score with fans of techno. Next, an experiment studying the effect of activating the classical music or techno fan stereotype on a mental task was conducted. Eighty-eight subjects (56 females) aged 19-22 years took part in the experiment. First, slides showing typical forms of behavior of classical music or techno musicians and listeners in the course of a concert/performance were presented. The participants were then asked to write a short report about typical forms of behavior of classical music/techno fans. Next, during a six-minute period, the participants were asked to learn twenty Latin words. Finally, they were tested on their knowledge. ANOVA revealed a significant effect of the type of stereotype activation on performance in the mental task. Neither gender nor interactions had significant effects. Results indicated that activation of the classical music listener stereotype prior to the mental task resulted in a slightly better score in the test of learning foreign words compared to activation of the techno fan stereotype. Differences between various forms of stereotype priming and their effects on performance are discussed.


A.15 When Push Comes to Shove: Rhythmic Behavior in Improvised Walking Bass
Ofer Gazit (1)*, Eric Battenberg (2), David Wessel (3)
(1) Department of Music, University of California, Berkeley, USA (2) Parallel Computing Laboratory, University of California, Berkeley, USA (3) The Center for New Music and Audio Technologies, University of California, Berkeley, USA
*= Corresponding author, ofergazit@berkeley.edu

The aim of this study is to investigate whether improvisation affects synchronization between bass and drums in performances of straight-ahead swing. Previous studies of commercial jazz recordings have shown that bassists demonstrate consistent onset asynchronies in relation to the pulse set by the drummer when improvising a "walking bass line". However, the specific effect of improvisation on bassists' rhythmic behavior and synchronization strategies is largely unknown. Twelve professional bassists were asked to perform four synchronization tasks, divided into two pitch-confined paradigms: a "scale paradigm" and a "jazz paradigm". Each paradigm includes a memorized control task and an improvised task. Each task was performed in three modes: (1) self-synchronized performance of the task, (2) performance of the task synchronized with a ride cymbal, and (3) a "double-time" performance of the task synchronized with a cymbal. This experimental design combines a sensorimotor synchronization research model with common stylistic procedures in jazz to allow analyses in different modes and tempi, at both global and local levels. It is hypothesized that the allocation of sufficient cognitive resources to the production of improvised walking bass lines will affect bassists' synchronization behavior. More broadly, it suggests that behavioral analysis of rhythmically constrained circumstances can illuminate some of the cognitive mechanisms that govern jazz improvisation.

A.16 Directional Asymmetry in Tonal Space and the Dramatic Use of bII: Theoretical Underpinnings, Empirical Evidence, and Musical Applications
Bruno Gingras (1)*
(1) Department of Cognitive Biology, University of Vienna, Vienna, Austria
*= Corresponding author, brunogingras@gmail.com

The endings of numerous musical works from the common-practice era exhibit a pronounced emphasis on the Neapolitan chord and, more generally, the bII tonal area, often as part of an expanded cadential progression leading to the concluding cadence. Such tonicizations of the bII tonal area are often set off through the use of dramatic pauses, virtuoso flourishes, or extended chromaticism, and tend to coincide with a breakdown of both voice-leading and rhythmic continuity. The combined effect of these textural changes is a striking instance of concinnity (LaRue, 2001), in which all elements of the musical language combine to reinforce the appearance of the Neapolitan chord as a "show-stopper" which then inexorably leads to the final cadence. Here, I suggest that composers' strategic use of the Neapolitan sixth may owe its effectiveness, at least in part, to the perceptual and directional asymmetry between modulations to the "flat side" and modulations to the "sharp side" of the cycle of fifths (Rosen, 1971). Werts' (1983, 1997) classification of modulating harmonic progressions introduced a theoretical framework that accounted for this asymmetry. Empirical validation for Werts' model was provided by Cuddy and Thompson (1992), who reported that modulations to the "flat side" effect a clearer perceptual shift in tonal organization than modulations to the "sharp side". Woolhouse's (2009) model of tonal attraction also predicts a tonal asymmetry similar to Werts' model. I present theoretical support for the hypothesis that an abrupt tonicization of the bII tonal area maximizes the perceptual impact of this directional asymmetry, and suggest that this is exploited musically to convey an impression of tonal remoteness which, by contrast, highlights the sense of closure brought about by the final cadence. Finally, I submit an experimental model to test this hypothesis and discuss relevant musical examples from J.S. Bach and Beethoven.


A.17 The Effect of Orchestration Changes on Continuous Responses of Emotional Intensity
Meghan Goodchild (1)*, Jonathan Crellin (1), Jonathan Wild (1), Stephen McAdams (1)
(1) Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Schulich School of Music, McGill University, Montreal, Canada
*= Corresponding author, meghan.goodchild@mail.mcgill.ca

This project seeks to model one aspect of the temporal dynamics of music listening by investigating the concept of peak experience, which consists of two or more coordinated affective responses, such as chills, tears, emotions, awe, and other reactions. Peak experiences relating to music have been studied through extensive retrospective self-reports; however, subjective memories may change over time, participants have difficulty verbalizing their ineffable nature, and there is limited temporal information. In this study, we explore emotional responses as experienced in real time, and ascertain the role of musical context. Previous research suggests that timbral contrasts (e.g., sudden shifts in orchestration) induce emotional responses in listeners, but timbre has not been theorized in music research to the same extent as other parameters. Musical stimuli were chosen to fit within four categories defined by the researchers based on instrumentation changes: gradual or sudden addition, or gradual or sudden reduction in instruments. Forty-five participants (22 musicians and 23 nonmusicians) listened to the orchestral excerpts and completed an explicit behavioural task, which involved continuously moving a slider to indicate the buildup and decay of the intensity of their emotional responses. They also completed questionnaires outlining their specific subjective experiences (chills, tears, awe, action tendencies, and other reactions) after each excerpt. Musical features (tempo, event onset frequency, loudness, instrumentation, texture, roughness, and various spectral properties of the acoustic signal) were coded as time series and used as predictors of the behavioural time series in a linear regression model, to explore the relationships between perceptual and musical/acoustical dimensions and to quantify elements of the temporality of these experiences (see the sketch after A.18 below). We will discuss response patterns specific to the various musical parameters under investigation, as well as individual differences caused by factors including musical training and familiarity with the stimuli.

A.18 What musical distinctiveness says about the organization of compositional memory
Eliot Handelman*
CIRMMT, McGill University, Montréal, Québec, Canada
*= Corresponding author, eliot@colba.net

Composers must produce new music that neither repeats existing music nor may be construed by connoisseurs as a close variation of existing music, according to cultural standards of originality. Despite the enormous burden thus placed on memory and invention, non-derivative composers (here, J.S. Bach) never seem to inadvertently replicate existing music in part or whole: appropriation, probably conscious, implies improvement. The problem is how the composer's memory may be organized and managed in order to accomplish this. Memory operation can be witnessed in the creative problem of attaining distinctiveness in sets of works. Distinctiveness implies differential treatment of especially salient shapes that occur in all music. We argue that the most salient shapes are necessarily the simplest, and that an emergent structure of simplicity, in the sense of Elizabeth Bates, is available through intuition rather than cultural transmission. This is modeled as a recursive shape grammar whose base elements are the enumerable set of simplest things that can be done with musical material. The simplest class of shapes so obtained is easily demonstrated to exist in all musical cultures. Nevertheless, no historical rubric exists for this category: the creative management of simplicity may well be intuitive. This strengthens the implication that there is a parallel between compositional memory and the organization of patterns promoting difference. Applied to the solo violin and cello music of J.S. Bach, our model reveals robust _avoidance_ of similarly-patterned simple structures in a combinatorial matrix governing bodies of works that are heard as unique and distinct. This result offers an argument that compositional memory is procedural rather than semantic: rather than remembering music as a succession of individual notes, music is remembered as structures which implicitly offer dynamic potential for difference and variation, much as chess players view configurations of the chessboard.
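The regression step described in A.17 can be sketched as ordinary least squares of the continuous slider trace on musical feature time series. This is a generic reconstruction under stated assumptions (common time grid, no lagging of predictors), not the authors' exact pipeline:

```python
import numpy as np

def fit_feature_model(features, response):
    """OLS fit of a continuous emotional-intensity trace on musical
    feature time series, in the spirit of A.17's analysis.

    features: dict name -> 1-D array, all sampled on a common time grid
    response: 1-D array, the slider trace resampled to that grid
    Returns (coefficients by name, R-squared)."""
    names = sorted(features)
    X = np.column_stack([features[n] for n in names])
    X = np.column_stack([np.ones(len(response)), X])  # intercept column
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    pred = X @ beta
    ss_res = np.sum((response - pred) ** 2)
    ss_tot = np.sum((response - response.mean()) ** 2)
    return dict(zip(["intercept"] + names, beta)), 1 - ss_res / ss_tot

# Toy usage with synthetic loudness/roughness traces over 60 s
t = np.linspace(0, 60, 600)
feats = {"loudness": np.sin(t / 5), "roughness": np.random.rand(600)}
resp = 0.8 * feats["loudness"] + 0.1 * np.random.randn(600)
coefs, r2 = fit_feature_model(feats, resp)
```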


A.19 An Acoustic Model for Chord Voicings in Post-Tonal Music
Robert Hasegawa*
Eastman School of Music, Rochester, NY, USA
*= Corresponding author, [email protected]

Theorists of post-tonal music frequently treat pitches as pitch classes, abstracted from any specific register. The generalizing power of this theoretical choice has led to remarkable advances, but at the cost of overlooking the effect of register on our perception of complex harmonies. Drawing on psychoacoustic research, this paper presents a model for the analysis of post-tonal chord voicings that is sensitive to both pitch class and register. As recognized by theorists since Rameau, there are parallels between the perception of chords and the perception of sounds with complex harmonic spectra. In each case, listeners tend to understand higher frequencies as overtones of lower ones, and to group frequencies matching the same overtone series into a single Gestalt. According to Albert Bregman, listeners apply "a scene-analysis mechanism that is trying to group the partials into families of harmonics that are each based on a common fundamental." An analogous mechanism can be developed for the analysis of post-tonal chord voicings. In relationship to a given root, each of the twelve pitch classes can be understood as a tempered approximation of a harmonic partial. Any pitch-class set can be voiced to fit the overtone series of each of the twelve equal-tempered pitch-class roots. However, not all of these voicings are equally convincing: the sense of harmonic rootedness is conveyed most strongly when (a) pitches are identified with harmonics in the lower part of the overtone series, and (b) low harmonics that reinforce the root are present in the chord. By elucidating the perceptual effects of different voicings of the same pitch classes, this analytical model offers a fuller appreciation of the vital role of register in music by composers including Boulez, Messiaen, and Knussen. The conclusion of the paper considers ways that the model could be refined through experimental research and testing.
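A rough computational reading of A.19's model, assuming equal-tempered MIDI pitches, a quarter-tone tolerance for matching a pitch to a harmonic, and an invented scoring rule that rewards low harmonic numbers (criterion a) and root-reinforcing low harmonics (criterion b); the weights are illustrative, not the paper's:

```python
import math

def harmonic_number(pitch, root, max_harmonic=32):
    """Nearest harmonic of `root` (both MIDI numbers) to `pitch`,
    or None if no harmonic lies within a quarter tone (50 cents)."""
    f0 = 440.0 * 2 ** ((root - 69) / 12)
    f = 440.0 * 2 ** ((pitch - 69) / 12)
    h = max(1, round(f / f0))
    if h > max_harmonic:
        return None
    cents = 1200 * math.log2(f / (h * f0))
    return h if abs(cents) <= 50 else None

def rootedness(chord, root):
    """Heuristic score for hearing `chord` as overtones of `root`:
    lower harmonic numbers score higher (criterion a), with a bonus
    for root-reinforcing harmonics 1, 2, and 4 (criterion b)."""
    score = 0.0
    for p in chord:
        h = harmonic_number(p, root)
        if h is None:
            return float("-inf")  # this voicing does not fit this root
        score += 1.0 / h
        if h in (1, 2, 4):
            score += 0.5
    return score

# Example: C2, E4, G4, Bb4 scored against candidate roots C1..B2
chord = [36, 64, 67, 70]
best_root = max(range(24, 48), key=lambda r: rootedness(chord, r))
```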

A.20 Brain correlates of happy and sad musical emotions using ragas of Hindustani classical music: An electrophysiological study
Shantala Hegde (1)*, Shobini L. Rao (1)
(1) Cognitive Psychology & Neuroscience Laboratory, Cognitive Psychology Unit, Center for Cognition and Human Excellence, Department of Clinical Psychology, National Institute of Mental Health and Neurosciences, Bangalore, India
*= Corresponding author, shantalah@nimhans.kar.nic.in

The present study was carried out to examine the electrophysiological correlates of happy and sad musical emotion in musically untrained adults. Six ragas of Hindustani classical music were used: three to evoke happy emotion and three to evoke sad emotion. Two excerpts from each raga formed the stimuli (mean duration = 129.00 s, SD = 6.00 s). Ratings by musically untrained adults (n = 30) on a 5-point Likert scale showed that the stimuli were distinguished as conveying happy and sad emotions above the level of chance. The sample for the present study included twenty right-handed, musically untrained adults (M:F = 10:10, mean age in years = 28.00, SD = 4.00). EEG was recorded using the Neuroscan (SynAmps) system at a sampling rate of 256 Hz, with 30 electrodes placed according to the 10/20 electrode placement system. Artifact-free data were analyzed using fast Fourier transformation (FFT) with a Hanning window with 50% overlap and an interval of 1023 ms, with a resolution of 0.978 Hz and a range of 500 Hz. Alpha asymmetry was calculated using the formula (ln Right power - ln Left power). Preliminary analysis has shown that absolute alpha power was significantly higher across all channels while listening to ragas evoking happy emotion in comparison to sad ragas and eyes-closed rest. Frontal EEG asymmetry did not differ between happy and sad emotion. This is the first study examining electrophysiological correlates of musical emotion using ragas of Hindustani classical music in musically untrained listeners. Detailed analysis of the musical properties of the ragas may contribute to a better understanding of the present results. The results are important in understanding the brain correlates of musical emotion, which is imperative given the growing popularity of music as an evidence-based therapeutic method in a wide range of clinical conditions. (Funded by the Department of Science and Technology under the fast track scheme for young scientists, Government of India, SR/FT/LS-058/2008, PI Hegde.)
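A.20's asymmetry index is straightforward to compute; a minimal sketch using a Hanning-windowed FFT, with the alpha band edges (8-13 Hz) assumed rather than taken from the abstract:

```python
import numpy as np

def band_power(x, sr, lo=8.0, hi=13.0):
    """Mean power of x in [lo, hi] Hz via a Hanning-windowed FFT,
    echoing the FFT analysis described in A.20 (band edges assumed)."""
    w = np.hanning(len(x))
    spec = np.abs(np.fft.rfft(x * w)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[mask].mean()

def alpha_asymmetry(left, right, sr=256):
    """A.20's index: ln(right alpha power) - ln(left alpha power)."""
    return np.log(band_power(right, sr)) - np.log(band_power(left, sr))

# Toy usage: 4 s of synthetic data from a homologous electrode pair
t = np.arange(4 * 256) / 256
left = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
right = 1.2 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
print(alpha_asymmetry(left, right))  # positive: more right alpha power
```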


A.21 Influence of practiced instruments on the automatic encoding of polyphonic melodies
C. Marie*, L. Herrington, & L. J. Trainor
Auditory Development Lab, McMaster University, Hamilton, Ontario
*= Corresponding author, mariec@mcmaster.ca

Western polyphonic music is typically composed of multiple simultaneous melodic lines of equal importance, referred to as "voices". Previous studies have shown that non-musicians are able to pre-attentively encode each voice in parallel sensory memory traces, as reflected by the mismatch negativity (MMN) derived from MEG recordings (Fujioka et al., 2004). MMN is seen in response to occasional changes in an ongoing stream of repetitive sounds. When presented with sequences composed of two simultaneous voices (melodies) with 25% changes in the higher voice and 25% changes in the lower voice, listeners show MMN to both changes, even though only 50% of trials are unchanged. Interestingly, the MMN for the higher voice is larger than for the lower voice, suggesting a more robust memory trace for the higher of two simultaneous voices. Using a similar protocol, the present study tests whether the advantage for the higher voice is malleable by experience. Specifically, we are testing whether the pitch register of the instrument played by musicians (higher/lower voice) modifies the dominance of the higher-voice memory trace. Our initial results show that MMN, recorded with EEG, is larger for changes to the lower than to the higher voice for cellists and bassists, but not for violinists and flutists. These results will inform us about the role of experience in encoding melodies in polyphonic contexts, and particularly whether experience playing the lower voice can overcome an inherent bias for better encoding of the higher voice.

A.22 The Influence of Absolute Pitch on Three-Dimensional Mental Rotation and Related Processing Characteristics
Jian-cheng Hou (1,2)*, Qi Dong (1), Qing-hua He (1), Chun-hui Chen (1), He Li (1), Chuan-sheng Chen (3), Gui Xue (4)
(1) State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, PR China (2) BRAMS, Department of Psychology, University of Montreal, Montreal, Quebec, H3C 3J7, Canada (3) Department of Psychology and Social Behavior, University of California, Irvine, Irvine, CA 92697-7085, USA (4) FPR-UCLA Center for Culture, Brain and Development, University of California, Los Angeles, CA 90095-1563, USA
*= Corresponding author, bonjovi_hou@163.com

This study examined the processing characteristics of individuals with absolute pitch (AP) ability performing a three-dimensional mental rotation (3DMR) task. Subjects were divided into an AP subgroup (with AP ability; experimental subgroup) and a non-AP subgroup (without AP ability; control subgroup), and both participated in the 3DMR tasks. The results showed that: (1) AP subjects had better 3DMR performance, which could be attributed to better visuospatial processing ability; AP subjects had multiple processing strategies, and visual and spatial imagery, as an important strategy, improved 3DMR performance. (2) Linguistic ability could influence non-linguistic cognition; visualization memory and the similarity between two objects could influence spatial cognition and related characteristics, and in turn the performance of 3DMR.


A.23 The Influence of Absolute Pitch on Tone Language Working Memory and Related Processing Characteristics
Jian-cheng Hou (1,2)*, Qi Dong (1), Qing-hua He (1), Chun-hui Chen (1), He Li (1), Chuan-sheng Chen (3), Gui Xue (4)
(1) State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, PR China (2) BRAMS, Department of Psychology, University of Montreal, Montreal, Quebec, H3C 3J7, Canada (3) Department of Psychology and Social Behavior, University of California, Irvine, Irvine, CA 92697-7085, USA (4) FPR-UCLA Center for Culture, Brain and Development, University of California, Los Angeles, CA 90095-1563, USA
*= Corresponding author, [email protected]

Absolute pitch (AP) has a tight relationship with tone languages. This study examined the processing characteristics of Mandarin speakers with AP ability performing Chinese phonological, Chinese semantic, and unfamiliar Tibetan word-form working memory (WM) tasks. Subjects were divided into an AP subgroup (with AP ability; experimental subgroup) and a non-AP subgroup (without AP ability; control subgroup), and both participated in the three visual WM tasks. The results showed that the AP subgroup had significantly better performance on the phonological and semantic WM tasks than the non-AP subgroup, but no significant difference on the word-form WM task. Within the limited capacity of WM, the AP subgroup could process phonology through 'lexical tone' and process semantics through multiple cognitive strategies, but needed more resources to process the unfamiliar Tibetan word forms, which increased WM load and conferred no strategic advantage. These results suggest that the WM advantage of the AP subgroup decreases as task difficulty increases.

A.24 Vocal Range Normalization and its Role in the Perception of Emotion in Different Voice Types
Randolph Johnson (1)*, Elizabeth Lagerstrom (2)
(1) Ohio State University, Columbus, USA, (2) Ohio State University, Columbus, USA
*= Corresponding author, randolph.johnson@gmail.com

Listeners perceive melodies as more or less aggressive or appeasing depending on the overall pitch height of the melody. In addition to the transposition of all of a melody's pitches, selective raising or lowering of specific scale degrees also has effects on perceived emotions such as sadness and aggression. Whether specific notes are inflected lower with relation to a scale, or entire melodies are shifted lower in an instrumental or vocal range, there is cross-cultural evidence consistent with the notion that "lower than normal" is associated with negatively-valenced emotions. Since pitch highness and lowness are relative to specific voice types, judgments of the tone of voice are likely normalized to the range of a given voice. The present study investigates listeners' ability, when presented with a single sung pitch, to determine the pitch's location within the range of an anonymous singer. First, we recorded eleven trained and untrained vocalists as they sang their entire chromatic range on two vowel types: "ah" and "ee". Then, a separate group of participants listened to individual pitches and estimated each note's relative position in each singer's range. The results are consistent with the notion that listeners can use timbre cues to infer the vocal range of a singer and normalize the produced pitch to the inferred range. Participants' note-position estimates accounted for approximately 70% of the variance of the sung notes' actual positions within vocal ranges. Vocal range estimates for "ee" vowels exhibited significantly smaller range-estimate discrepancies than "ah" vowels. We observed no significant difference in range-estimate discrepancies between male and female listeners, and no significant difference between estimates by vocal and instrumental music majors. For future studies, we explore the potential for spectral analysis techniques to uncover specific timbre cues related to tone of voice.
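The normalization A.24 asks listeners to perform implicitly can be stated in one line: a note's relative position is its distance above the bottom of the singer's range divided by the span of that range. A trivial sketch (MIDI numbering assumed):

```python
def relative_position(note, range_low, range_high):
    """Position of a sung note within a singer's range, from 0.0
    (bottom of range) to 1.0 (top), with pitches as MIDI numbers."""
    if range_high <= range_low:
        raise ValueError("range_high must exceed range_low")
    return (note - range_low) / (range_high - range_low)

# A4 (69) for a singer whose chromatic range spans G3 (55) to G5 (79)
print(relative_position(69, 55, 79))  # ~0.58, just above mid-range
```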


A.25 Investigating listeners' preference and brain responses to multichannel-reproduced piano music
Sungyoung Kim (1)*, Tomasz M. Rutkowski (2)
(1) Yamaha Corporation, Hamamatsu, Japan, (2) University of Tsukuba, Tsukuba, Japan
*= Corresponding author, sungyoung@beat.yamaha.co.jp

Understanding the complex characteristics of multichannel-reproduced music is a major challenge for a recording engineer who wants to create a satisfying sound quality and deliver it to listeners. To date, various studies on the topic have investigated physical, perceptual, and contextual factors that affect listeners' preference for multichannel-reproduced music. This study extended previous findings by investigating the relationship between variation in listeners' preference and physiological factors, including variation in the listeners' brain waves. For the experiment, we recorded various pieces of solo piano music using three multichannel microphone arrays, which controlled the stimuli to be constant in terms of musical performance but distinctly different from one another, mainly in how they captured the acoustical features and rendered them for a multichannel reproduction system. Six listeners, trained as musicians and recording engineers, participated in the experiment, which collected listeners' preference orders among the three multichannel piano sounds and measured their electroencephalographic (EEG) responses with a 128-channel high-resolution system. Additionally, electrooculographic (EOG) and electromyographic (EMG) data (for facial muscle activity) were recorded, along with heart rate, breathing rate, and forehead functional near-infrared (fNIRS) blood oxygenation. The results showed, through three-dimensional EEG source reconstruction, that the strength of theta, alpha, and beta waves in occipital and parietal cortices correlated with the variation in quantified listeners' preference for multichannel-reproduced piano music. The parietal cortices were areas that corresponded to auditory spatial information. In contrast, the variation in occipital alpha-wave patterns suggested that the level of relaxation was related to auditory-driven visualizations, considering the fact that the listeners kept their eyes open and focused on a fixed position in front of them. This pilot study's results imply that both the auditory spatial features of each stimulus and its corresponding visualization procedure affected listeners' preference for multichannel-reproduced piano music.

A.26 The Perception of Non-chord Tones vs. Unexpected Chord Tones in Tonal Melodies: Influence of Melodic Context on Implied Harmony Perception
Jung Nyo Kim*
Northwestern University, Evanston, USA
*= Corresponding author, jung-kim@northwestern.edu

A previous study on the perception of implied harmony in tonal melodies (Kim, 2009) showed that sing-back reaction times for expected chord tones were faster than those for unexpected chord tones, and that RTs became faster as the implied harmony became clearer. However, stimuli in that study consisted of only chord tones. The present study investigates how non-chord tones are interpreted and integrated with chord tones. As in the previous study, 18-tone melodies were constructed whose first 15 tones implied I-V-I-ii-V. However, each melody had an appoggiatura and a passing tone in the 15-tone context. Three target tones, following the context, consisted of only the expected tonic chord tones or included a non-chord tone (an appoggiatura or a passing tone). These non-chord tones were either resolved or not. The melodies were presented using a gating paradigm: musicians heard the first 16 tones in the first block, 17 tones in the second, and the whole melodies in the third. They sang the last tone of every sequence as quickly as possible, and their sing-back RTs were measured. Preliminary analyses showed that the presence of non-chord tones in the context influenced the interpretation of target tones which were not expected chord tones. On the 16th tone, RT differences between 'la' and the tonic chord tones were smaller than those in the previous study. Also, when 'fa' followed 'la', RTs became slower, contrary to the result of the previous study. These differences suggest that 'la' was interpreted as an appoggiatura in the present study, not as an unexpected chord tone as in the previous study. In the previous study, RTs for 'fa' were faster because the implied harmony became clearer. However, in the present study, RTs for 'fa' were slower because the appoggiatura was not resolved to the tonic chord tone. The RT trends for the passing tone on the 17th tone were similar to the results for the appoggiatura.


A.27 A Critical Examination of the Theory of Tonal Hierarchy and Arguments for a New Theoretical Framework for Explaining Tonality Perception
Ji Chul Kim
Northwestern University, Evanston, USA
*= Corresponding author, jc-[email protected]

Problems with Krumhansl's theory of tonal hierarchy are discussed, and an alternative theoretical framework is proposed to better explain tonality perception. First, Krumhansl's idea of tonal hierarchy as a nontemporal tonal schema is based on an implicit assumption that individual pitch events are the basic psychological units of tonality perception, between which similarity and psychological distance can be measured. This view cannot adequately explain the temporal-order effects induced by an established key or the role of temporal structure in establishing a key. I propose that the schematic knowledge of tonal organizations should concern perceptually stable and (partially) closed tonal-temporal patterns. Second, the role of bottom-up processing in the perception of relative tonal stability and its interaction with top-down processing are not fully accounted for in Krumhansl's theory. I propose that the pattern of tonal stability established by stimulus structure (bottom-up tonal stability) should be distinguished from the pattern of tonal stability inferred from the knowledge of typical tonal organizations (top-down tonal stability). These two types of tonal stability interact in the acquisition and activation of tonal schemata and the encoding of particular tonal structures. Third, pitch salience resulting from frequent occurrence, long duration, and other surface emphasis does not always lead to the perception of tonal stability. Tonal stability, which is a property of pitch events functioning within organized musical structures, is perceived only when individual tones are organized into coherent perceptual patterns. From these arguments, I propose that tonality is perceived when individual pitch events are perceptually organized into tonal-temporal patterns with internal reference points (structural pitches and rhythmic accents), whose internal relational structures are determined jointly by bottom-up cues in the stimulus structure and by top-down inferences based on activated knowledge structures.

A.28 Pitch and Eyebrow Height: A Transcultural Phenomenon?
Niall Klyn (1)*, Matthew Campbell (1)
(1) Cognitive Ethnomusicology Laboratory at The Ohio State University, Columbus, Ohio, USA
*= Corresponding author, klyn.1@osu.edu

Any behavior that functions as a signal should be strongly evident cross-culturally, but most studies on F0 and facial expression have focused solely on relatively small cultural groups. Furthermore, the methodology employed in most experimental studies by its nature often allows demand characteristics to creep into the data-acquisition process. The present study explored the possibility of a transcultural F0-eyebrow-height correlation while attempting to eliminate possible demand characteristics. Using an existing body of multicultural video with audio, The JVC/Smithsonian Folkways Video Anthology of Music and Dance, this study used a three-pass system to code the data and preserve the independence of the coding groups. In the first pass, portions of the anthology wherein a singer's face is clearly visible and the singer is individually audible were identified. Two independent groups of graders were then recruited. In the second pass, the first group of coders marked the lowest and highest pitches sung in the passages identified in pass one, while listening to the anthology with no video. The second group graded the relative eyebrow height of the singer at the time points identified by the first group while watching the video with no audio. The study then explored the correlation between these data.


A.29 Functional neuroimaging of musical emotions: a review and meta-analysis
Thomas Kraynak*
Case Western Reserve University, Cleveland, Ohio, USA
*= Corresponding author, tek11@case.edu

In the past ten years researchers have carried out dozens of neuroimaging studies using musical stimuli that induce emotional responses. Curiously, the results of these studies have rarely been pooled together and compared in the fashion of a meta-analysis. Considerable variance in experimental design and power may dissuade such combinative research; however, shared activation across particular structures and systems in varying paradigms would indicate significant attributions in emotional and musical processing. Main questions include the effect of crossmodal stimulus integration on emotion perception, the experimental effect of computerized vs. live recorded music, and the interplay between emotional valences and their respective neural networks. Published fMRI and PET studies involving music and emotion were acquired, and Talairach coordinates of significant activations were pulled. Studies combining music and other emotional stimuli, such as film and pictures, were included. Because of the small number of collected studies (~30 at this point, dwarfed by typical meta-analyses), I am applying a multi-level kernel density analysis (MKDA; Wager et al., 2007) in order to control for false positives. Studies are still being added to the analyses; however, initial analyses show significant bilateral amygdalar activation in both positive and negative emotional contrasts. Furthermore, in some crossmodal combined conditions, musical accompaniment continues to activate sensory integration areas (MTG) in the absence of visual stimuli. This meta-analysis of the current neuromusical literature will prove a valuable resource for future directions in imaging studies, as well as for field discussion on the importance of uniformity in music neuroimaging studies.

A.30 Effects of Contour Change on Memory Encoding for Minuets: An ERP Study
Shannon L. Layman (1)*, Ramiro R. Lopez (1), W. Jay Dowling (1)
(1) The University of Texas at Dallas, Richardson, TX, USA
*= Corresponding author, shlayman@utdallas.edu

How does one encode the auditory information of music into memory? Previous studies hypothesized that features of music, such as contour, are important in memory encoding. It has also been demonstrated that the time interval between hearing a sample of music and the listener's response to it determines the degree of integral formation of memory. The present study explores memory encoding for melodic contours over variable segments of time. We presented listeners with classical minuets in which recollection of an initial phrase was tested after short (4 s) or long (12 s) delays. There were two test conditions: no change, and a change in contour at either the 3rd or 4th note from the start of the test phrase. Participants identified test items as the same as or different from the initial phrases. Responses differed between long and short delays, suggesting that participants, given time, were able to convert the initial phrases into usable memory stores. Additionally, we measured EEG responses to explore how the brain responds when a contour change occurs in a just-heard musical phrase. Results suggest that in non-musicians, when the target is farther in temporal proximity to the test, the ERP component occurring in the latency range from 300 to 700 ms (the N5 component) shows a greater negativity in the central fronto-lateral region than when the target is closer to the test. In musicians, however, the opposite pattern occurs: when the target is close in temporal proximity to the test, the N5 shows a greater negativity than when the target is farther from the test. These results shed light on the previous behavioral results, demonstrating that, at least in musicians, when the target and test are within close temporal proximity, there is a possible hindrance to memory.
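The core of the MKDA approach A.29 cites can be sketched as follows; the kernel radius, grid spacing, and unweighted averaging are simplifying assumptions relative to Wager et al.'s full method, which also weights studies and tests the density against a permutation-based null:

```python
import numpy as np

def mkda_density(study_peaks, grid, radius=10.0):
    """Simplified multi-level kernel density: each study contributes a
    binary map (1 within `radius` mm of any of its peaks), and density
    is the proportion of studies activating each grid point.

    study_peaks: list of (n_i, 3) arrays of Talairach coordinates
    grid:        (m, 3) array of grid-point coordinates (mm)"""
    density = np.zeros(len(grid))
    for peaks in study_peaks:
        d = np.linalg.norm(grid[:, None, :] - peaks[None, :, :], axis=2)
        density += (d.min(axis=1) <= radius).astype(float)
    return density / len(study_peaks)

# Toy usage: two "studies" with peaks near the amygdalae
studies = [np.array([[23.0, -5.0, -15.0]]),
           np.array([[-22.0, -4.0, -16.0], [24.0, -6.0, -14.0]])]
xs = np.arange(-30, 31, 10.0)
grid = np.array([[x, y, z] for x in xs for y in xs for z in xs])
dens = mkda_density(studies, grid)
```

Counting studies (rather than individual peaks) is what makes the analysis "multi-level": one study reporting many nearby peaks cannot dominate the map on its own.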


A.31 Relationship between basic auditory abilities and performance on the MBEA
J. Devin McAuley (1)*, Elizabeth Wieland (2)
(1) Department of Psychology, Michigan State University, East Lansing, USA, (2) Department of Psychology, Michigan State University, East Lansing, USA
*= Corresponding author, [email protected]

Congenital amusia is a music-specific disorder that is unrelated to general auditory functioning, cognitive ability, or exposure to music (Ayotte, Peretz, & Hyde, 2003). Impairments in congenital amusia include deficits in pitch processing and difficulty recognizing melodies without the aid of lyrics. Rhythm impairments in amusia are less consistently observed. The most widely used method to diagnose amusia involves the Montreal Battery of Evaluation of Amusia (MBEA), which consists of six subtests that separately assess melodic organization, temporal organization, and memory. In a related literature on individual differences in auditory abilities, previous research has shown that speech perception abilities are surprisingly independent of a number of basic auditory abilities that tap into spectral and temporal processing. The present study considered the question of whether basic auditory abilities predict aspects of music perception, and more specifically performance on the MBEA. The test we used to assess auditory processing was the Test of Basic Auditory Capabilities (TBAC) (Watson et al., 1982; Kidd, Watson, & Gygi, 2007). The TBAC consists of three single-tone discrimination tests varying frequency, duration, and intensity, three temporal-pattern processing tests, and two speech tests. Participants completed the TBAC and MBEA on different days, with the order of the two tests counterbalanced. Overall, performance on the TBAC and MBEA was highly correlated. Regression analyses revealed that approximately 50% of the variance on the MBEA is accounted for by single-tone frequency discrimination and the ability to detect the presence or absence of a single tone embedded in the middle of a 9-tone sequence.

A.32 Neural mimicry during perception of emotional song
Lucy McGarry (1)*, Lisa Chan (1), and Frank Russo (1)
(1) Ryerson University, Toronto, Canada
*= Corresponding author, lmcgarry@psych.ryerson.ca

Recent research in our laboratory using facial EMG has demonstrated that individuals mimic emotional movements during perception of song (Chan & Russo, in prep). In the current study, we wished to determine whether perception of an emotional song generates enhanced neural mimicry in addition to peripheral mimicry. The mu wave, an electroencephalographic (EEG) oscillation with peaks at 8-12 and 20 Hz whose suppression occurs during both perception and execution of action, is thought to reflect mirror neuron system (MNS) activity. The mirror neuron system is purported to be involved in the perception and analysis of emotional intention. We predicted that lyrics sung emotionally would generate greater mu suppression than the same lyrics sung neutrally. Data were collected using a within-subjects design. EEG data were collected while participants viewed randomized audiovisual recordings of the same singer performing the same song lyric in three different ways: happy, neutral, or sad. Timing was equated across conditions. Preliminary results suggest that happy and sad song lyrics generate greater mu suppression than neutral lyrics. Our results suggest that emotional song performance leads to greater neural simulation of movement than neutral song performance: when the same stimuli are performed in an emotional as opposed to a neutral way, greater suppression of the mu wave is elicited, suggestive of greater MNS activity. It is difficult to discern whether this enhanced MNS activation causes peripheral mimicry, or whether peripheral mimicry elicits greater MNS activity as a result of moving. Future studies utilizing muscle paralysis techniques should examine whether neural mimicry occurs independently of movement.
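Mu suppression in studies like A.32 is conventionally indexed as a log ratio of mu-band power in a condition relative to a baseline; a minimal sketch, with the band edges and the log-ratio form taken from the general convention rather than from the abstract:

```python
import numpy as np

def band_power(x, sr, lo=8.0, hi=13.0):
    """Mean spectral power of x in [lo, hi] Hz."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    return spec[(freqs >= lo) & (freqs <= hi)].mean()

def mu_suppression(condition, baseline, sr):
    """Log ratio of mu-band power in a condition vs. baseline.
    Negative values indicate suppression, read in this literature
    as greater mirror-neuron-system engagement."""
    return np.log(band_power(condition, sr) / band_power(baseline, sr))
```

In practice this would be computed at central electrodes (e.g., C3/C4), averaged over artifact-free epochs, and compared across the happy, sad, and neutral performance conditions.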


A.33 Investigating the Role of Musical Expertise in Phonetic Analogies of Guitar Timbre
Audrey Morin (1)*, Nathalie Gosselin (2), Caroline Traube (3)
(1) BRAMS/OICRM, Université de Montréal, Montréal, Canada, (2) Adjunct Professor, Faculté de musique, Research Associate, BRAMS/OICRM, Université de Montréal, Montréal, Canada, (3) Associate Professor, Faculté de musique, BRAMS/CIRMMT/OICRM, Université de Montréal, Montréal, Canada
*= Corresponding author, audrey.morin.3@umontreal.ca

Expert guitar players create a great variety of timbres by varying the plucking position, angle, and strength on the string, and these different timbres can be characterized by guitarists in terms of their similarity to particular vowels (Traube et al., 2004). The main goal of this study is to investigate the origin of these phonetic analogies as used for timbre description (by means of onomatopoeia, for example) and the role of musical expertise in their perception. To this aim, we tested expert guitar players (with an average of 14 years of musical education) and non-musicians (no musical education outside the regular Quebec curriculum) on a priming task. The stimuli for this task were produced by a professional guitarist who played two contrasting guitar sounds that corresponded to two French vowels ([ɛ̃] as in "vin" and [u] as in "vous"), and then sang the corresponding vowels, always on the same pitch. With these recorded stimuli, we ran a priming experiment in which participants performed in two conditions: guitar-primed or vowel-primed trials. Each trial consisted of the vowel target [ɛ̃] or [u] preceded by a prime that was congruent or incongruent with the target. Participants were instructed to identify the auditory vowel targets while ignoring the first sound of the pair. Preliminary results tend to show a similar reaction-time profile for guitarists and for non-musicians. Both groups were faster to name the vowel when the prime was congruent with the target. Moreover, this congruency effect is observed on both guitar-primed and voice-primed trials. These results point toward the hypothesis of a perceptual similarity between guitar sounds and vowels rather than an effect of musical training and/or learning by association. This could be explained by the acoustical similarities between particular guitar sounds and vowels (a similar overall shape of the spectral envelope).

A.34 Harmonic Function from Voice-Leading: A Corpus Study
Ian Quinn (1)*, Panayotis Mavromatis (2)
(1) Yale University, New Haven, CT, USA, (2) New York University, New York, NY, USA
*= Corresponding author, ian.quinn@yale.edu

We describe a data representation for the voice leading between two sonorities in a chorale texture, and a similarity measure for these voice leadings. These tools are used in an empirical study of the relationship between voice leading and harmonic function in a corpus of Bach chorales and a corpus of Lutheran chorales from a hundred years earlier. Common voice-leading types in the corpora are subjected to a cluster analysis that is readily interpreted in terms of harmonic functional syntax. We are thus able not only to read a theory of harmony directly out of a corpus, but to do so without building in a priori notions of chord structure, rootedness, or even key. The cluster analysis also clarifies important syntactic differences between the pre-tonal (modal) corpus and the Bach (tonal) corpus.
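The abstract does not spell out the representation, so the following is only a plausible minimal encoding of a chorale voice leading, with a city-block similarity measure over it, given for concreteness:

```python
from itertools import zip_longest

def voice_leading(chord_a, chord_b):
    """Encode the voice leading between two sonorities as the tuple of
    signed semitone motions in each voice. A plausible minimal encoding,
    not necessarily the representation used in A.34."""
    if len(chord_a) != len(chord_b):
        raise ValueError("chorale texture assumed: same number of voices")
    return tuple(b - a for a, b in zip(chord_a, chord_b))

def vl_distance(vl1, vl2):
    """City-block distance between two voice leadings (an assumed
    similarity measure): summed per-voice differences in motion."""
    return sum(abs(x - y) for x, y in zip_longest(vl1, vl2, fillvalue=0))

# ii6 -> V vs. IV -> V in C major, SATB as MIDI numbers
vl_a = voice_leading([65, 62, 57, 53], [67, 62, 59, 55])  # (2, 0, 2, 2)
vl_b = voice_leading([65, 60, 57, 53], [67, 62, 59, 55])  # (2, 2, 2, 2)
print(vl_distance(vl_a, vl_b))  # 2: closely related voice leadings
```

Voice leadings encoded this way could then be clustered (the abstract mentions a cluster analysis) without any prior labeling of chords by root or key.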


A.35 Composing by Selection: Can Nonmusicians Create Emotional Music?
Lena Quinto (1)*, William Forde Thompson (1), Alex Chilvers (2)
(1) Department of Psychology, Macquarie University, Sydney, Australia, (2) Sydney Conservatorium of Music, University of Sydney, Sydney, Australia.
*= Corresponding author, [email protected]

In music, the communication of emotion typically proceeds from a process of emotional coding by composers and performers to emotional decoding by listeners. It is well established that listeners are sensitive to the emotional cues used by composers and performers, but because their role is passive, it is unclear whether they are competent at manipulating these same cues in order to create an emotional composition. The purpose of this experiment was to determine whether nonmusicians are capable of communicating emotion in music by manipulating relevant acoustic attributes. Twenty-three participants (1.52 years of training on average) were presented with pairs of quasi-random pitch sequences that differed in intensity (low vs. high), articulation (staccato vs. legato), pitch height (high vs. low), tempo (fast vs. slow), and modality (major vs. minor). They then chose the sequence that most strongly conveyed one of the emotions of anger, fear, happiness, sadness, or tenderness. Participants' choices were retained in each successive trial, such that the selected attributes were cumulative. Once all of the choices had been made for a given emotional intention, a new random tone sequence was generated that contained all of the selected attributes. The choices made by participants were significantly different across emotional intentions. This was especially pronounced for the expression of happiness and sadness. The former was conveyed with high intensity, fast tempo, staccato articulation, high pitch, and major mode, and the latter with the opposite cues. The choices for the other emotions were mixed. Many decisions mirror those made by expert musicians, although discrepancies did arise. These findings broaden our understanding of musical communication by demonstrating the presence and limitations of explicit knowledge for emotional communication in nonmusicians.

A.36 Deconstructing Evolution's Sexual Selection Shows Music Could Arise Without Becoming Sex Dimorphic: Music is Not a Fitness Indicator
Mark S. Riggle*
Causal Aspects, LLC, Charlottesville, VA, USA
*= Corresponding author, [email protected]

Was music selected for by evolution, or did music emerge as a side effect of other traits? Many researchers assume the latter because of the lack of viable scenarios describing evolutionary selection for music. Among the non-viable sexual selection scenarios is music working as a fitness indicator controlling female choice; this is because music ability is not sex dimorphic, and, if it functioned as a fitness indicator, then dimorphic it must be. However, although sexual selection usually leads to a sex dimorphism of the selected trait, we show that under certain conditions sexual selection will fail to produce a sex dimorphism. Sexual selection rapidly develops a trait because males possessing stronger expression of the trait gain a reproductive advantage. While the trait will initially express in both sexes, the trait only becomes dimorphic if it produces a fitness reduction in females. That is, without the reduction in female fitness, there is no selective pressure to sex-link the trait. Thus a trait can be under rapid development and, if it merely remains fitness-neutral for the female, the trait will not become dimorphic. Since all traits have a fitness cost, for the trait to remain fitness-neutral it must offer some offsetting benefit. We show that if a particular sensory pleasure for rhythmic sounds exists, then for those individuals who can create rhythmic sounds, both sexes gain benefit. While the males gain a direct reproductive advantage, the females' advantage may lie in group bonding among females. A bonded group provides protection to her offspring against conspecifics (a female reproductive advantage). Although music ability may thus be sex monomorphic, other music-related traits, such as pleasure and motivations, may be dimorphic. We show music may have arisen by sexual selection without being a fitness indicator.


A.37 Musical Training, Working Memory, and Foreign Language Learning
Matthew Schulkind (1)* and Laura Hyman (2)
(1) Amherst College, Amherst, MA, USA (2) Amherst College, Amherst, MA, USA
*= Corresponding author, mdschulkind@amherst.edu

Recent empirical work has suggested that the cognitive processing of music and language are closely aligned. The brain areas involved in processing musical and linguistic syntax appear to overlap, and musical training enhances pitch processing in both musical and linguistic contexts. Foreign language and musical training have also been shown to facilitate performance on executive control tasks (e.g., the Simon task). The current study was designed to examine whether musical training and/or musical aptitude are associated with improved performance on a foreign-language-learning task. The subjects in the experiment were college students with varying amounts of musical experience. The subjects participated in two experimental sessions. In the first session, the subjects completed tests of working memory span and musical aptitude; a musical training questionnaire and a questionnaire assessing experience with foreign language learning (both in the home and in academic settings) were also completed. In the second session, the subjects were taught 40 words and 12 common phrases in an unfamiliar foreign language (Amharic, spoken in Ethiopia). The learning phase was followed by a test phase that assessed old/new recognition for words and recall of definitions. Performance on the foreign language recall and recognition tasks was directly correlated with measures of musical training, but not with measures of musical aptitude. Musical aptitude scores were correlated with simple digit span and with measures of foreign language experience outside of the laboratory (e.g., number of languages studied; languages spoken in the home during childhood). Finally, working memory span was correlated with both years of foreign language study and some measures of musical training. These data suggest that both foreign language and musical training may enhance general working memory capacity and that some aspects of musical and language behavior draw upon a common pool of cognitive resources.

A.38 Surveying the Temporal Structure of Sounds Used in Music Perception Research
Michael Schutz (1)*, Jonathan Vaisberg (1)
(1) McMaster Institute for Music and the Mind, Hamilton, Ontario, CANADA
*= Corresponding author, [email protected]

A sound's temporal structure or "amplitude envelope" is both perceptually salient and informative: it allows a listener to discern crucial information about the event producing the sound. Recent work has shown that differences in envelope lead to categorically different patterns of sensory integration. Namely, sounds with "percussive" envelopes indicative of impact events (i.e., a fast onset followed by an exponential decay, such as the sound produced by a piano) integrate more readily with concurrent visual information than sounds with "flat" amplitude envelopes (i.e., fast onset, flat sustain, fast offset, such as a telephone dial tone; Schutz, 2009). Additionally, melodies with percussive envelopes are easier to associate with target objects (Schutz & Stefanucci, 2010) than their flat counterparts. Given these differences, we were curious to explore the degree to which each is used in auditory research. To this end, we examined the temporal structure of sounds used in articles published in the journal Music Perception (this project is based on a larger survey by Tirovolas & Levitin that will also be presented at SMPC 2011). Our analysis indicates that of the empirical articles using either single synthesized tones or a sequence of synthesized tones (over 100 papers), roughly 30% exclusively used flat envelope shapes, and roughly 20% used at least one percussive sound (i.e., piano or percussive tones). Curiously, 35% of the articles did not offer sufficient information about the temporal structure to allow for classification. Given the admirable attention to methodological detail displayed in these articles, we found this omission intriguing. In conjunction with our recent empirical work on the perception of percussive vs. flat tones, we believe this suggests a rich area of future perceptual research that will be of interest to the field.
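The two envelope classes A.38 contrasts are easy to synthesize; a minimal sketch with assumed decay and ramp parameters:

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def percussive(freq, dur, sr=SR, decay=8.0):
    """'Percussive' tone: near-instant onset with exponential decay,
    as in A.38's impact-event category (decay rate assumed)."""
    t = np.arange(int(dur * sr)) / sr
    return np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

def flat(freq, dur, sr=SR, ramp_s=0.01):
    """'Flat' tone: fast onset, steady sustain, fast offset,
    like a telephone dial tone (10 ms ramps assumed)."""
    t = np.arange(int(dur * sr)) / sr
    env = np.ones_like(t)
    r = int(ramp_s * sr)
    env[:r] = np.linspace(0, 1, r)
    env[-r:] = np.linspace(1, 0, r)
    return env * np.sin(2 * np.pi * freq * t)

# 500 ms A4 exemplars of each envelope class
p, f = percussive(440, 0.5), flat(440, 0.5)
```

Reporting something as simple as these few parameters (onset, decay or sustain shape, offset) would resolve the classification gap the survey identifies in 35% of articles.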


A.39 Context-Dependent Pitch Perception in Consonant and Dissonant Harmonic Intervals
George Seror III*, Jeremy Gold, W. Trammell Neill
University at Albany, State University of New York, Albany, New York, USA
*= Corresponding author, gs433931@albany.edu

We conducted two experiments to examine whether the perception of component tones in a harmonic interval is affected by the consonance or dissonance of the interval. A second aim of this study was to determine whether the effect of consonance or dissonance differs for upper vs. lower voices. Each experiment used a single tone (B or C) followed by a harmonic interval in which the single note was repeated (e.g., C followed by C-G or C-F#) or one in which the single note was not repeated (e.g., C followed by B-F# or B-G). In Experiment 1, the context tone in the interval (F# or G) was always above the target tone (B or C). Participants' reaction time and accuracy were measured for yes-no key-press responses. We hypothesized that participants would be slower and less accurate when responding to tones in a dissonant interval (e.g., the tritone) vs. a consonant interval (e.g., the perfect fifth). Specifically, our hypothesis suggests phase mismatch in the dissonant condition and harmonic reinforcement in the consonant condition as the reason for this effect. For both reaction time and accuracy, there was a main effect of consonance, with tone detection in consonant intervals being faster and more accurate than in dissonant intervals. Experiment 2 used the same procedure except that the context tone in the interval was always below the target tone. To eliminate the potential effects of varying target pitch height, we used the same target tones as in Experiment 1. No main effect of consonance was found. The results from Experiment 2 indicate that the consonance or dissonance produced by the lower context tone did not affect judgments of the upper target tone. We conclude that the perception of lower, but not upper, tones in a harmonic interval is affected by the interval's consonance or dissonance.

A.40 Primitive Hierarchical Processes and the Structure of a Single Note
David Smey*
CUNY Graduate Center, New York, USA
*= Corresponding author, [email protected]

Twenty-five years after its publication, Lerdahl and Jackendoff's Generative Theory of Tonal Music remains unique in positing a strictly segmented and rigorously ordered framework for hierarchically organized musical perception. Such a framework is valuable in that it provides a comprehensive working model that can satisfy the interests of music theorists and place more focused investigations in a wider context. However, as an actual model of cognitive organization the theory is somewhat problematic: it is not geared to account for tonal perceptions in real time and seems insufficiently grounded in processes of everyday hearing. My current project aims to dismantle and reconfigure the Lerdahl and Jackendoff framework into a model that is more dynamic, efficient, and authentic. I begin by considering the perception of single notes (and, more generally, discrete sonic events), a phenomenon that the GTTM authors took as given. When one focuses on individual sounds a new sequence of events becomes clear, as what I call "primitive" event-hierarchical processes are engaged before meter rather than after. I recharacterize grouping as mere edge-detection, assert that the perception of continuity is intrinsically event-hierarchical, and describe a model of meter that is less self-contained and internally consistent.


A.41 The Ineffability of Modern Art Music
Cecilia Taher (1)*
(1) University of Arkansas, Fayetteville, USA
* = Corresponding author, ctaher@yahoo.es

This poster explores the relationship between text description and non-musicians' perception of modern art music, aiming to illuminate our comprehension of that kind of music. The practice of providing verbal descriptions of the music to the audience is widespread. Nevertheless, the effects of verbal descriptions and theoretical comprehension of music on the way we experience music are still unclear. Contrary to what we would expect according to the practice of program notes, experimental studies have shown that "prefacing a [tonal] excerpt with a text description reduces enjoyment of the music" (Margulis, 2010). The complexity of tonal and atonal music seems to lie in different aspects of the musical discourse. In the present experiment, with a repeated-measures design, 65 college non-music majors heard 24 50-second-long excerpts taken from compositions by Carter, Berio, Ligeti, and Stockhausen, and rated their level of enjoyment along a 7-point Likert-like scale. The participants read a four-line text before listening to each excerpt. Three kinds of texts were randomly presented: (1) structural, technical text about the music that follows; (2) dramatic, affective text about the music; and (3) non-related text. The results suggest that the mental processes and emotional responses that underlie non-musicians' enjoyment of modern art music are not guided by the structural or dramatic information that a short descriptive text can convey. Some contemporary pieces appear to be more enjoyable than others, independently of the previous information given to the listener. Despite the complexity of modern art music, increased knowledge in the form of short descriptions does not prove to have an effect on enjoyment.

A.42 Music and the Phonological Loop
Lindsey Thompson (1)*, Margie Yankeelov (1)
(1) Belmont University, Nashville, TN, USA
* = Corresponding author, lindseymarie.thompson@gmail.com

Research on the phonological loop and music processing is currently inconclusive, due both to conflicting data and differing definitions of "musician" or target "musical" information. The goal of the current study is to help unify the literature on musical working memory by modifying certain design elements and definitions. I used previous methods of measuring musical and linguistic working memory abilities with interference, but modified them to account for musical syntax and intervallic relationships. Across two experiments with similar interference structures, 31 musically proficient and 31 musically non-proficient Belmont University undergraduates listened to six practice tracks and 30 experimental tracks that contained five seconds of interference. In Experiment 1, interference consisted of white noise, a complete sentence, or a monophonic melodic phrase. In Experiment 2, interference consisted of white noise, a list of 3-syllable English words, or a list of 3-note musical patterns selected from the taxonomy of tonal patterns (Gordon, 1976). Participants were instructed to remember a 3-syllable word or 3-note musical pattern and to determine whether or not the remembered word/pattern was the same as or different from a second word/pattern presented after the interference. Results of two-way balanced ANOVAs yielded significant differences between musical participants and non-musical participants, as well as between interference types for musical stimuli, implying a potential revision of the phonological loop model to include a temporary storage subcomponent devoted to music processing.


A.43 The Effect of Training on Melody Recognition
Naresh N. Vempala (1)*, Frank A. Russo (1), Lucy McGarry (1)
(1) SMART Lab, Department of Psychology, Ryerson University, Toronto, Canada
* = Corresponding author, [email protected]

We extend a study by Dalla Bella et al. (2003) that compared melody recognition in musicians and nonmusicians. They identified three landmarks: familiarity emergence point (FEP), isolation point (IP), and recognition point (RP), and showed that musicians took longer in reaching IP, but had an earlier FEP and RP. Using cohort theory, Dalla Bella et al. interpreted these results as a consequence of a larger stored corpus in musicians' long-term memory (LTM). Vempala and Maida (2009) later simulated these findings computationally using corpus sizes that might be expected in musicians vs. nonmusicians. In the current study, we attempt to experimentally test the corpus size hypothesis by inducing differential corpus sizes in two groups of participants. Participants with similar levels of music training were divided into two groups: large corpus (LC) and small corpus (SC). Eight diatonic melodies were composed in each of four different modes: Dorian, Phrygian, Lydian, and Mixolydian, with Dorian likely being the most familiar mode. The LC group learned all 32 melodies while the SC group learned only eight (two from each mode). Both groups were tested on familiarity and recognition of these melodies using a gating paradigm. Our results revealed no difference between groups with regard to FEP, IP, or RP. Based on the null findings, we believe that we were unsuccessful in our attempt to induce differential corpus sizes, likely due to the number of melodies already in listeners' LTM prior to the experiment. Nonetheless, an important new finding emerging from this work is that LC participants demonstrated confidence in recognition (RP) of Dorian melodies significantly earlier than SC participants. This latter finding suggests that in addition to overall corpus size, confidence through short-term acquisition in a familiar domain plays a significant role in the time course of melody recognition.



Titles and abstracts for poster session 2 (Saturday)
Arranged alphabetically by last name of first author

B.1 Does the change of a melody's meter affect tonal pattern perception?
Stefanie Acevedo (1)*, David Temperley (2), & Peter Q. Pfordresher (1)
(1) University at Buffalo, State University of New York, USA, (2) Eastman School of Music, Rochester, New York, USA
* = Corresponding author, [email protected]

The interaction of motivic structure (repeated melodic patterns) and metrical structure is thought to be a critical component of music perception (cf. Lerdahl & Jackendoff, 1983). It has been anecdotally observed that the recognition of repeated melodic patterns is facilitated when the repetitions are aligned with the meter (Povel & Essens, 1985; Sloboda, 1983), but this has never been systematically demonstrated. We report an experiment that explored whether matched metrical and motivic structure facilitates the recognition of alterations to pitch patterns that have been stored in short-term memory. Eight tonal melodies were composed with binary (four-note) or ternary (three-note) repeated patterns. Each melody was preceded by a harmonic progression that suggested either a simple meter (aligned with the binary patterns) or a compound meter (aligned with the ternary patterns), and a regular metronome click occurred throughout the melody that maintained the implied meter. Melodies, thus, consisted of motivic structures and metrical structures that were crossed factorially and could match or mismatch. On each trial, participants heard a single combination of meter and melody twice; in half the trials, one pitch in the second presentation could be displaced. Results showed differences between musically trained and untrained subjects (mean percentage correct: trained subjects = 84%, untrained subjects = 57%). The results for untrained subjects showed no matching effects. However, trained-subject data showed a marginal interaction between pattern and meter effects: within the context of the same motivic pattern structure, matching metrical structure resulted in increased accuracy compared to mismatching metrical structure. Trained subjects also showed greater accuracy on stimuli with ternary motivic structures. The current results show possible influences of higher-order aspects of pattern structure on the perception of local properties of events (in this case, pitch class), as well as possible differences in perception of motivic structures of differing lengths.

B.2 The Melody of Emotions
Michel Belyk (1)*, Steven Brown (1)
(1) Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
* = Corresponding author, [email protected]

Speech prosody conveys linguistic meaning, such as syllabic stress and sentence focus. But it conveys emotional meaning as well. The objective of the present study was to examine the connection between music and emotion through melody. Most previous studies linking music with emotion have focused on comparisons of register, pitch range, tempo, and timbre, to the neglect of melody. The current study examined the melodic shapes of elicited emotional expressions. Subjects were presented with neutral text scenarios followed by pictures indicating the outcome of the scenario; the scenario-picture pairs were designed to elicit a wide range of emotions. Subjects were then cued to vocalize an appropriate exclamation (e.g., "Yay!" in response to joy). Pitch contours were extracted from the recorded vocalizations using Praat, and then melodic shapes were analyzed using Functional Data Analysis, a cutting-edge statistical tool for analyzing the structure of time-varying data. Most exclamations, much like musical phrases, were characterized by a sudden rise followed by a gradual fall (as in "Yay!"). However, a subset of emotions had distinctive melodic features: sensual pleasure, disgust, gratitude, appreciation, awe, and terror were expressed with a more symmetrical rising-and-falling contour (as in the "Mmmm" of pleasure). These observations suggest an important melodic parallelism between emotional and musical expression.


B.3 Expression in romantic piano music: Criteria for choice of score events for emphasis
Erica Bisesi (1)*, Richard Parncutt (2)
(1) Centre for Systematic Musicology, University of Graz, Graz, Austria, (2) Centre for Systematic Musicology, University of Graz, Graz, Austria
* = Corresponding author, erica.bisesi@uni-graz.at

Motivation. Musical accents may be immanent (grouping, metrical, melodic, harmonic) or performed (variations in timing, dynamics, and articulation). Performers use performed accents to "bring out" immanent accents. How they do that depends on musical/personal style, temporal/cultural context, and intended emotion/meaning. We are investigating pianists' intuitive artistic criteria for selecting score events for emphasis and for deciding what kind of emphasis to apply. We are independently listening to diverse commercially available recordings of Chopin Preludes op. 28 and intuitively marking salient features of each pianist's timing, dynamics, and articulation on the scores, focusing on striking differences among performances. On the basis of these data we are formulating intuitive individual principles for selecting and emphasizing score events. Generally, melodic accents are more important than is generally assumed in the psychological literature, which focuses first on phrasing and second on harmonic accents. Pianists who play more virtuosically tend to shape big phrase arches by means of wide changes in timing and dynamics, while emphasis on melodic and harmonic accents is specific to slower and more meditative performances. Regarding the latter, we find two main groups of performances: those where accents are locally emphasized, contrasting with the stability of the phrases, and those where they are pillars supporting the agogics of phrases and sub-phrases. Previous research on expression has searched for underlying principles but neglected qualitative accounts of the rich detail in individual performances. We are balancing sciences (psychology, computing), humanities (theory/analysis, history), and performance (intuitive knowledge of pianists). Our intuitive listening method has advantages and disadvantages over quantitative analysis of MIDI files, so the two approaches should be combined. [This research is supported by Lise Meitner Project M1186-N23 "Measuring and modelling expression in piano performance" of the Austrian Research Fund (FWF, Fonds zur Förderung der wissenschaftlichen Forschung).]

B.4 Melodies and Lyrics: Interference Due to Automatic Activation
Jack Birchfield (1)*
(1) University of Texas at Dallas, Dallas, Texas, USA
* = Corresponding author, jack.birchfield@utdallas.edu

Melodies and their associated lyrics are uniquely bound in memory, but can merely hearing a melody automatically activate retrieval of its lyrics? Previous research into this question has been inconclusive, and I propose that relative familiarity of the musical stimuli can account for the different outcomes. In effect, highly familiar songs are more likely to produce automatic activation of the associated lyrics than less familiar songs. To explore this premise, I conducted two experiments which controlled both for the familiarity of the musical stimuli and for the presence or absence of lyrics. In Experiment 1, participants heard a random 9-digit sequence followed by one of five auditory distractors (highly familiar or less familiar vocal songs presented without their lyrics [HVNL and LVNL], familiar instrumental music, or white noise), then were asked to recall the digits in correct order. The hypothesis was that, if the lyrics were triggered, they should interfere with rehearsal and retention of the digit sequence, thus producing poorer recall performance compared to instrumental music or noise. However, results showed no significant difference between HVNL songs and instrumental music. As lyric interference is dependent on rehearsal of the digits, it was possible that some participants were not rehearsing. Thus, in Experiment 2 participants were required to silently articulate the digits to ensure rehearsal. Overall results were similar to Experiment 1, but it was found that participants with higher musical expertise demonstrated significantly poorer recall when hearing HVNL songs than when hearing instrumental music. While this finding seems consistent with the concept of automatic activation, the difference was due to better performance while hearing instrumental music, rather than a decrease in HVNL performance. This suggests that the musically trained are better at disregarding familiar instrumental music than those with less expertise, yet are equally affected by unsung lyrics.


B.5 Musical Expertise and the Planning of Expression During Performance
Laura Bishop*, Freya Bailes, Roger T. Dean
MARCS Auditory Laboratories, University of Western Sydney, Sydney, Australia
* = Corresponding author, l.bishop@uws.edu.au

Musicians often say that the ability to imagine a desired sound is integral to expressive performance. Our previous research suggests that expressive loudness can be imagined and that experts imagine it more vividly than non-experts. Research also suggests that online imagery may guide expressive performance when sensory feedback is disrupted. However, both the effects of sensory feedback disruption on online imagery and the relationship between online imagery ability and musical expertise remain unclear. This study aims to investigate the relationship between musical expertise and the ability to imagine expressive dynamics and articulation during performance. It is hypothesized that imagery can occur concurrently with normal performance, that imagery ability improves with increasing musical expertise, and that imagery is most vivid when auditory feedback is absent but motor feedback present. Pianists performed two melodies expressively from the score under three feedback conditions: (1) with auditory feedback, (2) without auditory feedback, and (3) without auditory or motor feedback (imagined performance). Dynamic and articulation markings (e.g., crescendo, staccato) were periodically introduced into the score, and pianists indicated verbally whether the marking matched their expressive intentions while continuing to play their own interpretation. MIDI pitch, duration, and key velocity data were collected for comparison against baseline performances, given under normal feedback conditions using scores devoid of expressive notation. Preliminary analyses suggest that, as expected, expressive profiles are most accurately replicated under normal feedback conditions, but that imagery is most vivid in the absence of auditory feedback. The improvements to online imagery ability expected to co-occur with increasing musical expertise, if observed, will support the idea that enhanced imagery abilities contribute to expert musicians' extraordinary control over expression. If no relationship between imagery ability and musical expertise is found, imagery ability may not be as integral to expert music performance as traditionally presumed.

B.6 Perceptual grouping: The influence of auditory experience
Keturah Bixby (1)*, Joyce McDonough (2), Betsy Marvin (3)
(1) University of Rochester, Rochester, NY, (2) University of Rochester, Rochester, NY, (3) Eastman School of Music, Rochester, NY
* = Corresponding author, kbixby@bcs.rochester.edu

Perceptual grouping has classically been viewed as a fixed property of the auditory system. This study provides additional evidence that it can also be influenced by auditory experience. Native English speakers tend to group tones of alternating amplitude as trochees, and tones of alternating duration as iambs, but native speakers of other languages sometimes perceive the tone sequences differently. Musicians are a group whose intensive auditory learning experiences have likely affected their perceptual groupings, but this iambic-trochaic grouping principle has not yet been studied in musicians. This experiment extends Iversen, Patel & Ohgushi (2008), comparing musicians and non-musicians on perceptual grouping tasks. Non-musicians listening to tone sequences of alternating durations tend to group tones as iambs, while musicians seem to group the sequences more flexibly, switching between iambic and trochaic interpretations. This suggests that perceptual grouping can be influenced by even late auditory experience.


B.7 Song Style and the Acoustic Vowel Space of Singing
Evan D. Bradley*
University of Delaware, USA
* = Corresponding author, [email protected]

…, but during singing, the articulation of lyrics interacts with musical concerns for resonance, expression, and style, which may have consequences for the intelligibility of their linguistic content. A previous study (Bradley, 2010) reported that the acoustic vowel space of female singers undergoes systematic changes during singing, consistent with known techniques such as larynx lowering. Specifically, the first and second formant frequencies of vowels are lowered for low and front vowels, respectively, resulting in compression of the vowel space and overlap of vowel categories. The present study investigated how these changes in the vowel space are moderated by song style. Semi-professional female singers spoke and sang the lyrics of (i) a folk ballad, and (ii) a modern art song setting of poetry, which had a faster tempo and dynamic melody, and vowel formants were measured. Differences were found between the vowel formants of the two songs, including: (a) for low vowels (/a, ae/), f1 was lower for both songs compared to speech, but f1 lowering for the art song was less than for the ballad; (b) for front vowels (/i, e, ae/), f2 was lower for both songs versus speech, but f2 lowering for the art song was less than for the ballad. The gradient effects on low and front vowels (a, b) are consistent with previously described changes to the vowel space, moderated by song style, and are attributed to characteristics of the art song, including style, tempo, and range.

B.8 Orff-Schulwerk approach and flow indicators in Music Education context: A preliminary study in Portugal
João Cristiano* & R. Cunha
Instituto Politécnico de Bragança, Portugal
* = Corresponding author, jcrcunha@hotmail.com

This paper presents preliminary results from ongoing research in the Music/Music Pedagogy area, which aims to discuss the relation between the Orff-Schulwerk approach and the development of Musical Thought/Musical Cognition, based on emotional and creative processes. Attempting to verify, analyze, and understand this relationship, the empirical process is based on the Flow Theory (Optimal Experience) developed by Csikszentmihalyi (1988, 1990). On the basis of an experimental study, developed in the context of Music Education teaching in general public schools, we verify the existence of different "optimal experiences/flow states" boosted by several activities/teaching music strategies. With this purpose, we have adopted and adapted the FIMA (Flow Indicators in Musical Activity) developed by Custodero (1998, 1999, 2005) and analyzed data (audio and video) collected in loco in two Music Education classes (5th and 6th grades, respectively) during three months (first academic term 2010/2011). The analysis of preliminary data obtained using FIMA enables the validation of the experimental study as well as the extraction of relevant conclusions regarding the variation of "optimal experiences/flow states" in the Music Education context. In particular, it is worth mentioning that during this experimental study, most of the "flow indicators" (FIMA) were observed in classes developed through the Orff-Schulwerk approach. These observations can strengthen the relationship that we believe exists between the Orff-Schulwerk approach and the development of Musical Thought/Musical Cognition. Moreover, inherent "optimal experiences/flow states" lived in the classroom seem to be valuable indicators of emotions, which may be a basic foundation of the relationship between music and emotion (Sloboda and Juslin, 2001, 2010), as a pillar of the ongoing investigation.


B.9 Movement during Performance: A Hunt for Musical Structure in Postural Sway
Alexander P. Demos*, Till Frank, Topher Logan
Department of Psychology, University of Connecticut, Storrs, USA, [email protected]
* = Corresponding author, alexander.demos@gmail.com

The movements of musicians in performance have been difficult to investigate because of their complexity. However, by using techniques from dynamical systems theory created for chaotic systems (i.e., phase-space reconstruction [PSR] and recurrence quantification analysis [RQA]), postural sway movements can be related to the score and reviewed for similarities within and between performances. The movements of a professional trombonist (third author) and an amateur violinist (first author) performing selections from J. S. Bach's cello suites were recorded on a Nintendo Wii Balance Board (35 Hz). To determine the dimensionality of the underlying system and reconstruct the phase space, the movements underwent two analyses (average mutual information index and false nearest neighbor). The resulting four-dimensional PSR underwent an RQA to locate where in the music recurrent patterns of movement occurred. For each performance, the RQA showed recurrence in important structural locations, for example, within and across both the beginning of the main phrase, when the pattern repeats several bars later, and at the end of the piece when the musical pattern is reversed. Recurrence in movement patterns between the performances, assessed by cross-RQA, showed recurrence patterns in locations different from those seen within the individual performances. The recurrent movement patterns within performances suggest that the musical structure is, in part, reflected by the complex swaying movements of the performer. However, the differences between performances suggest that the movement patterns were unique to each performance. As in other chaotic systems, the initial conditions of the performer affected the pattern of movement across the whole performance, making each one different. The movements within individual performances, on the other hand, reflected the influence of the musical structure.

B.10 Developing a Test of Young Children's Rhythm and Metre Processing Skills
Kathleen M. Einarson (1)*, Laurel J. Trainor (1)
(1) McMaster Institute for Music and the Mind, McMaster University, Hamilton, Canada
* = Corresponding author, einarsk@mcmaster.ca

Research indicates that adults can perceptually extract the beat from rhythmic sequences and can move in synchrony with that beat. At the same time, adult performance is affected by experience with the particular hierarchical metrical structure of their culture's music. We examine development in Western kindergarten children, asking (1) whether they show a perceptual bias for common Western metres, (2) whether production develops later than perception, and (3) whether perception and production abilities are correlated. On each trial of the perception task, 5-year-olds are presented with a rhythmic sequence in either a four-beat, five-beat, or six-beat metre, where each beat contains one of three patterns: one quarter note, two eighth notes, or one quarter rest. The sequence is then repeated, with small alterations on half of the trials. In the metric alteration, the second sequence contains one additional beat. In the rhythmic alteration, the notes on one beat are replaced by a different pattern (i.e., one of quarter, two eighths, or rest). In a computer game, children indicate whether the animal producing the second sequence is able to copy exactly the animal producing the first sequence. The production tasks consist of recording and analyzing the children's ability to (1) tap to an auditory beat and (2) tap back simple beat sequences. Additionally, we measure motor skills, vocabulary, pre-reading skills, and working memory in order to examine correlations between these abilities and rhythmic perception. Data collection is ongoing. We expect that five-year-old children will be most sensitive to alterations in sequences whose metre is widespread in their native culture's music, namely, the four- and six-beat metres. We expect that perception and production skills will be correlated and that general motor skills and working memory will account for some individual variation in perception and production abilities.
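The delay-embedding and recurrence steps named in B.9 can be sketched as follows (an editorial illustration, not the authors' code; in B.9 the embedding dimension and lag were chosen via the false-nearest-neighbor and average-mutual-information analyses, and the recurrence radius would be tuned to the data, so the values below are placeholders):

    import numpy as np

    def delay_embed(x, dim=4, lag=10):
        """Phase-space reconstruction: stack lagged copies of a 1-D series."""
        n = len(x) - (dim - 1) * lag
        return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

    def recurrence_matrix(x, dim=4, lag=10, radius=0.1):
        """Boolean recurrence plot: mark pairs of reconstructed states that lie
        within `radius` of each other; RQA measures summarize this matrix.
        (Pairwise distances are computed in full, so this suits short series.)"""
        emb = delay_embed(np.asarray(x, dtype=float), dim, lag)
        dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        return dists < radius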


B.11 Effects of musical training on speech understanding in noise
Jeremy Federman (1)*, Todd Ricketts (2)
(1, 2) Vanderbilt Bill Wilkerson Center for Otolaryngology and Communication Disorders, Nashville, TN, USA
* = Corresponding author, jeremy.federman@vanderbilt.edu

Professional musicians have performed better, demonstrated shorter reaction times, and/or exhibited larger cortical amplitude responses than non-musicians on tasks of timbre perception, pitch perception and frequency discrimination, contour and interval processing, spatial ability, and vocabulary and verbal sequencing. However, it is currently less understood whether the effects of musical training generalize to other important, non-musical scenarios such as understanding speech in noise. For the current study, primary aims included investigating effects of musical training on attention, working memory, and auditory stream segregation as they relate to music perception and speech understanding in noise. Specifically, two groups differentiated by musical training status (professional musicians and non-musicians) were assessed using non-speech schema-based auditory stream segregation tasks (Music Achievement Test [MAT], interleaved melody task), working memory capacity (operation span), attention (dichotic listening task), and speech-based, schema-based auditory stream segregation tasks (Hearing in Noise Test, Connected Speech Test). Data collected to date were analyzed to assess the effects of musical training. Results showed that musicians significantly outperformed non-musicians on the non-speech schema-based auditory stream segregation tasks. Regarding the speech-based schema-based tasks, data showed that musicians understand speech in noise better than non-musicians both when speech and noise were collocated and when spatial separation of the speech and noise is provided but signal-to-noise ratio is maintained at both ears. When spatial separation and changing SNR localization cues were present, although there was an effect of noise location, there was no effect of musical training, suggesting that the effect of changing SNR as speech and noise are spatially separated is a robust cue that may swamp any group performance differences. Results are important because they may represent a new way to investigate solutions to the number one complaint of individuals with hearing loss even when provided with amplification, namely the inability to understand speech in noisy environments.

B.12 Differentiating people by their voices: Infants' perception of voices from their own culture and a foreign species
Rayna H. Friendly (1)*, Drew Rendall (2), Laurel J. Trainor (1, 3)
(1) McMaster University, Hamilton, Canada, (2) University of Lethbridge, Lethbridge, Canada, (3) Rotman Research Institute, Toronto, Canada
* = Corresponding author, friendr@mcmaster.ca

The ability to discriminate and identify people by voice is important for social interaction in humans. It is also important in musical contexts, where we can identify singers by their voice qualities. In the present study, we are investigating the role that experience plays in the development of voice discrimination. Learning to discriminate a number of musically relevant stimuli has been shown to follow a common pattern of experientially driven perceptual narrowing. For example, 6-month-old North American (NA) infants can detect mistunings equally well in both native Western scales and foreign Javanese scales. However, NA adults are much better at detecting the mistunings in native than Javanese scales. The aim of the current study is to investigate whether a similar narrowing pattern occurs for the processing of different vocal timbres. We tested English-speaking adults', 6-month-olds', and 12-month-olds' abilities to discriminate either native-species (human) or foreign-species (primate) vocalizations. On each trial, adults heard two vocalizations produced by females of one voice category (English-speaking humans or rhesus monkeys) and indicated whether they were produced by the same individual or by two different individuals. Infants were tested on the same discriminations using a conditioned head-turn procedure in which correct head turns to a change in individual were rewarded with animated toy and light displays. Findings show that six-month-olds discriminated human voices and primate voices equally well. In contrast, 12-month-olds and adults discriminated between human vocalizations more easily than primate vocalizations. Results suggest that the ability to discriminate individuals by voice becomes specialized for the vocal timbres in one's environment between 6 and 12 months of age.


B.13 Signs of infants' participatory and musical behavior during infant-parent music classes
Helga Rut Gudmundsdottir*
University of Iceland, Reykjavík, Iceland
* = Corresponding author, helgarut@hi.is

Infants are discriminative listeners and learners of music. Remarkably early in life, infants process and internalize the tonal, melodic, and rhythmic information presented in music. Musical behavior in infants has been reported in home and nursery settings, interacting with parents and peers. Attention has to a lesser extent been directed towards infants' musical or participatory behavior in parent-infant music classes. Music classes for infants and parents together have become a widespread practice in many countries. The phenomenon of parent-infant music courses is a relatively new subject of study and little is known about the direct effects of such courses. There are some indications that parent-infant music courses may affect the participating parents' well-being. However, the effects of music courses on infants' behavior and learning remain to be explored. The present study investigated the behavior of 8- to 9-month-old infants during ten music classes taught over a period of 5 weeks. Each class was videotaped from different angles. The video recordings were categorized according to the ongoing activities. Selected activities were analyzed and coded according to the type of behavior elicited by the infants. Violations from the routine of chosen activities were systematically planted into one of the classes towards the end. Infant responses to violations were compared to responses during normal conditions. The validity of the categorization of infant responses was tested with a panel of independent judges. The study sheds light on the types of behavior found in 8- to 9-month-old infants during parent-infant music courses. The aim was to identify what constitutes musical behavior and participatory behavior by infants in such a setting.

B.14 The Effect of Amplitude Envelope on an Audio-Visual Temporal Order Judgment Task
Janet Kim*, Michael Schutz
McMaster Institute for Music and the Mind, McMaster University, Hamilton, Canada
* = Corresponding author, [email protected]

A "musical illusion" demonstrates that percussionists strategically use long and short gestures to alter the perceived duration of notes performed on the marimba, although these gestures have no effect on the notes' acoustic duration (Schutz & Lipscomb, 2007). This documentation of a strong visual influence on auditory judgments of event duration contrasts with previous research demonstrating that vision generally has minimal (if any) influence on temporal tasks, such as judging event duration. This exception to previously observed patterns of audio-visual integration appears to be largely a function of amplitude envelope (Schutz, 2009). The possibility that amplitude envelope plays a crucial role in audio-visual integration raises several theoretical issues. Therefore, in order to explore this claim, we designed a temporal order judgment (TOJ) task offering an objective measure of the role of amplitude envelope in audio-visual integration. In this experiment, participants were presented with a series of video clips consisting of a tone paired with an image of a single white dot (temporal alignment of tone onset and dot onset varied across trials) and were subsequently asked to judge the temporal order of the two stimuli (dot vs. tone). Two types of tones were used: "percussive" (exhibiting an exponential decay) and "flat" (an abrupt offset, similar to sounds used in auditory psychophysics research). Participants (1) found the task more difficult when the dot preceded the tone, and (2) when the tone preceded the dot, the task was significantly more difficult when hearing percussive tones compared to flat tones. The results suggest that the perceptual system is more inclined to integrate percussive sounds rather than flat sounds with temporally adjacent visual events.


B.15 Motion Capture Study of Gestural-Sonic Objects
Mariusz Kozak (1)*, Kristian Nymoen (2), Rolf Inge Godøy (3)
(1) University of Chicago, Chicago, USA, (2) University of Oslo, Oslo, Norway, (3) University of Oslo, Oslo, Norway
* = Corresponding author, mkozak@uchicago.edu

In this poster we will present the results of two experiments in which we investigated how different spectral features of sound affect listeners' movements. Current theories argue for the existence of gestural-sonic objects, which are small perceptual sonic entities extracted from a continuous stream of sound by our perceptual and cognitive systems, bound up with particular embodied responses (Godøy, 2006). These have been studied in tasks involving sound tracing, or drawing what subjects perceived to be the "shape" of the sound on a tablet. In the present context we extended this notion to free movement in three-dimensional space and employed motion capture technology to observe participants' gestures in response to sounds. Whereas previous work used only single sounds, here we designed sequences according to variable rhythmic complexity. Thus, we came up with four different rhythms, ranging from metrically simple to non-metrical and without isochronous pulses. We paired these with three perceptual features of sound (loudness, pitch, and brightness) and two kinds of attack envelopes (smooth and sharp), resulting in 24 separate stimuli. We asked participants to move their right arm in synchrony with each rhythm, once using jerky/mechanical motions, and once with smooth/continuous movements. Our results indicate that sound features had an effect on gesture continuity and the overall quantity of motion, but not on the accuracy of synchrony itself. In the second experiment we once again used motion capture to study participants' gestures, but this time to excerpts taken from 20th- and 21st-century repertoire. The purpose was to extend our findings to more ecological settings, and observe gesture types employed by listeners in contexts that are timbrally and rhythmically complex. On the basis of previous work on feature extraction we observed how sonic and gestural features correlated with one another.

B.16 Interactive Computer Simulation and Perceptual Training for Unconventional Emergent Form-bearing Qualities in Music by Ligeti, Carter, and Others
Joshua B. Mailman*
Columbia University, New York, USA
* = Corresponding author, jmailman@alumni.uchicago.edu

Embracing the notion that metaphors influence reasoning about music (Zbikowski 2002, Spitzer 2004), this study explores a computational-phenomenological approach to perception of musical form driven by dynamic metaphors. Specifically, rather than static metaphors (structure, architecture, design, boundary, section), dynamic ones are emphasized (flow, process, growth, progression) as more appropriate for modeling musical form in some circumstances. Such models are called dynamic form (or more precisely: temporal dynamic form) and arise in a substantial variety of ways, as shown by Mailman (2009, 2010). Adopting an interdisciplinary approach, this presentation shows some computational models of qualities that convey such dynamic form in unconventional repertoire. Since such models are quantitative, it is plausible that listeners who do not spontaneously attend to these by default could be trained to do so, and then subsequently tested on their perception and cognition of such form-bearing flux. Using simulation algorithms developed for this purpose, the presentation offers a Max/MSP computer software patch that enables real-time user manipulation of the intensity of such qualities. Such hands-on control is intended to cultivate sharper perception, cognition, attention, and interest of listeners confronting unconventional music. The presentation also offers computer animations of some theorized unconventional emergent qualities, which indeed constitute vessels of musical form.


B.17 Automatic Imitation of Pitch in Speech but not Song
James Mantell*, Peter Pfordresher, Brian Schafheimer
University at Buffalo, The State University of New York, Buffalo, NY, USA
* = Corresponding author, jtm29@buffalo.edu

Previous research suggests that pitch may be automatically imitated during speech shadowing tasks, yet little work has examined automatic imitation of pitch across speech and song contexts. The current research examined two questions using a shadowing/repetition paradigm: (1) do individuals imitate pitch when they are not explicitly instructed to do so, and (2) does the magnitude of pitch imitation differ when individuals begin their repetitions immediately vs. after a delay? In two studies, participants were instructed to clearly repeat the word contents of 48 3-5 syllable, word- and pitch-contour-matched speech and song target sequences. Participants were instructed to repeat the word contents of the target sequence as soon as possible (Study 1, n = 11), or after a three-second delay (Study 2, n = 15). We assessed absolute and relative pitch accuracy by quantifying mean absolute error and calculating target-production pitch correlations. We also compared the current data with a previously collected dataset that utilized an intentional pitch imitation task (Mantell & Pfordresher, in review). Results clearly showed that intentional imitation leads to more accurate pitch productions than shadowing. However, the current shadow results revealed greater relative pitch imitation for sentences than melodies, perhaps because participants ignored the musical pitch contours, but not the prosodic speech contours. Comparisons across Study 1 and Study 2 suggested that shadow delay does not strongly influence pitch production for these word repetition tasks. Finally, although group-level analyses indicated that the task instructions (repeat words; no mention of pitch) reduced pitch imitation, an analysis of individual differences revealed that several participants aligned their pitch productions with the targets as accurately as average performance from the intentional pitch imitation task, but only for sentences.

B.18 Sequence Context Affects Memory Retrieval in Music Performance
Brian Mathias (1)*, Maxwell F. Anderson (1), Caroline Palmer (1), Peter Q. Pfordresher (2)
(1) McGill University, Montreal, Canada, (2) University at Buffalo, Buffalo, USA
* = Corresponding author, brian.mathias@mail.mcgill.ca

Some models of memory retrieval during sequence production propose that sequence elements are prepared prior to production in an incremental fashion: performers' access to sequence events in short-term memory is constrained to a subset of events. Serial ordering errors, in which correct sequence events are recalled in incorrect positions, have been cited as evidence for simultaneously accessible elements and incrementality of planning in production tasks. We test predictions from a formal model of planning for effects of sequential context on memory retrieval. Twenty-six skilled pianists practiced novel short musical excerpts that were embedded in small and large melodic contexts until they achieved an error-free performance, and subsequently performed the sequences at fast and moderate tempi, chosen to elicit errors. Pitch error rates decreased across practice, demonstrating learning, and increased with production rate, consistent with speed-accuracy tradeoffs. An interaction between context and tempo was observed: long contexts enhanced the effect of tempo on error rates in the excerpt, relative to short contexts. Serial-ordering errors tended to arise from greater distances in excerpts placed in longer contexts than in shorter contexts. Fits of the data with a contextual model of sequence planning (Palmer & Pfordresher, 2003) showed that errors tended to arise from metrically similar events more often in long contexts than in short contexts. These findings provide evidence that longer contexts facilitate the planning of events that are enveloped by the context, by strengthening associations among proximal and metrically similar sequential elements.
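The two accuracy measures named in B.17, mean absolute pitch error and the target-production pitch correlation, reduce to a few lines (an editorial sketch; it assumes the pitch traces have already been extracted and converted to cents, and the function name is illustrative):

    import numpy as np

    def pitch_accuracy(target_cents, produced_cents):
        """Absolute accuracy (mean absolute error, in cents) and relative
        accuracy (correlation of the produced contour with the target)."""
        t = np.asarray(target_cents, dtype=float)
        p = np.asarray(produced_cents, dtype=float)
        mean_abs_error = np.abs(p - t).mean()
        contour_r = np.corrcoef(t, p)[0, 1]
        return mean_abs_error, contour_r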


B.19 Developing a window on infants' structure extraction
Jennifer K. Mendoza (1)*, LouAnn Gerken (2), Dare Baldwin (3)
(1) University of Oregon, Eugene, OR, USA, (2) University of Arizona, Tucson, AZ, USA, (3) University of Oregon, Eugene, OR, USA
* = Corresponding author, jmendoz4@uoregon.edu

Young infants detect abstract rules in domains as diverse as language, visual displays, and music, expediting their acquisition of sophisticated knowledge systems. Recently, Gerken, Balcomb, & Minton (in press) discovered that 17-month-olds discriminate linguistic stimuli containing a learnable abstract rule versus those lacking a learnable rule: when input instantiated a learnable rule, infants listened longer and listening times displayed a detectable peak. We are curious whether these listening-time (L-time) effects will emerge in infants' music processing, and whether it is possible to use L-time to diagnose the specific point during learning when infants discover an abstract pattern. In a first study, infants (6-8 months) hear strings of the same set of 3 pure-tone sequences either containing an abstract rule (ABB) or lacking a rule. If infants listen longer and display a listening peak only when the abstract pattern is present, this would provide initial confirmation that L-time indexes infants' rule detection. A second study aims to determine whether a listening-time peak represents the specific point at which infants discover abstract structure during their ongoing exposure to the stimulus set. We manipulate the frequency range of the stimuli, affecting infants' ability to distinguish between tones, while keeping the abstract pattern across tones constant. Abstract rules should be more challenging to discover when tones are difficult to discriminate. Therefore, infants should show a later peak in listening time for low- versus high-frequency sequences. In a third study, infants hear high- versus low-frequency stimuli in which no abstract structure is present. If L-time peaks are specifically related to abstract pattern discovery, stimulus differences without implications for patterns should not affect them. Together, these studies have the potential to validate a methodology providing an altogether new window on infants' rule learning as it unfolds during real time, capturing infants' online rule discovery for all to see.

B.20 The Effect of Visual Stimuli on Music Perception
Jordan Moore (1)*, Christopher Bartlette (2)
(1) Dallas Baptist University, Dallas, TX, USA, (2) Baylor University, Waco, TX, USA
* = Corresponding author, moore.jordan@me.com

We investigated whether a visual stimulus affects the perception of a musical excerpt's valence. The hypothesis was that the visual stimulus would affect the perception of the music, such that the perceived valence of the music would shift towards the visual. One hundred fifty undergraduate students observed eight stimuli consisting of (1) four visual-music pairs and (2) four music-alone controls. The visual and musical clips were categorized by valence (positive/negative) and arousal (high/low); these were determined through a pre-test involving 12 musically trained graduate students, and confirmed through the control condition in the main study. Participants were divided into eight groups, and each group observed a different ordering of the stimuli. Following the presentation of each stimulus, participants selected an adjective from a group that best represented their perceived emotion of the music and also completed four 7-point Likert-type scales designed to assess cognitive perceptions of the musical structure. In all conditions where the valence of the visual and music agreed, the mean valence ratings for the music did not significantly differ from the non-visual control rating. When the valences of the visual and music disagreed, the result was a significant, symmetrical trend that highlights the effect that arousal has on the perception of valence. Low-arousal musical stimuli were strongly affected by visual conditions (p < .0001), with mean valence ratings shifted towards the visual. High-arousal musical stimuli were also affected by visual conditions (p < .05); however, the only significant shift in valence ratings occurred with high-arousal, opposite-valence visuals.


B.21 An Experiment on Music Tempo Change in Duple and Triple Meter
Yue Ouyang (1, 2)*
(1) Beijing Technology and Business University, Beijing, China, (2) Communication University of China, Beijing, China
* = Corresponding author, [email protected]

Researchers have found an interesting phenomenon, similar to the time-order error observed in duration perception. When the time interval is less than 500 ms, people estimate accelerating tempo changes more accurately; when the time interval is more than 700 ms, people estimate slowing tempo changes more accurately. When the time interval is between 500 ms and 700 ms, various investigations have indicated different results. This work was aimed at investigating whether people have a perceptual bias when the time interval is 600 ms, and also attempted to explore the impact on tempo-change judgments of factors such as change variety, change direction, and meter type, as well as different music learning experiences. The experiment used two drum beats, as downbeat and upbeat, to present duple and triple meter sequences respectively. In each trial there were three standard bars before the variable beat; the variable beat was advanced or delayed by 15%, 10%, 8%, 5%, or 2%, or arrived on time. Participants were divided into two groups according to whether they had more than 10 years of music training. The two groups were asked to listen to each trial carefully and judge whether the last beat came early, came late, or was on time. The results show that people are more sensitive when the beat is early than when it is late, and this superiority was especially obvious in triple meter. Furthermore, music students performed better than non-music students in both accuracy and reaction time, again especially in triple meter. It can be inferred that professional music training may help improve people's ability to judge tempo changes, and that this effect acts more obviously in triple meter.

B.22 Listener-defined Rhythmic Timing Deviations in Drum Set Patterns
Brandon Paul (1, 2)*, Yuri Broze (2), Joe Plazak (2)
(1) Department of Speech and Hearing Science, The Ohio State University, Columbus, OH, USA, (2) School of Music, The Ohio State University, Columbus, OH, USA
* = Corresponding author, [email protected]

Rhythmic timing deviations (RTDs) are timing displacements of performed rhythms in relation to a presumptive downbeat or tactus. Drummers are thought to employ RTDs in order to create a sense of "feel" or "groove" in certain musical styles. We aim to further describe listener preferences by employing a 'method of adjustment' paradigm in which participants actively manipulate snare drum RTDs to their liking. Stimuli were synthesized from digitally recorded drum samples according to notated patterns randomly chosen from a drum set methods book. Each pattern consisted of a four-beat measure continuously looped with an inter-beat interval of 600 ms (100 bpm). Each of 10 different stimuli appeared three times in randomized order for a total of 30 trials. 31 participants were asked to adjust the snare drum onset timing to their preference. Snare drum onsets occurred only on metric beats 2 and 4 and could not be independently adjusted. Post-experiment interviews collected information on overall timing preferences, strategies, and adjustment confidence. Results indicate that the majority of participants preferred timings that were as synchronous as possible. The mean distance from the synchronous point was 1 ms ahead of the beat (SD = 14 ms). The frequency distribution of raw timing preferences was skewed toward timings ahead of the beat (-.165) and was leptokurtic (.521). Post hoc analysis revealed no significant difference between subjects grouped by timing strategy, preference, or confidence (p > .05). Intra- and inter-rater reliability coefficients of timing preferences for drum set patterns were found to be low. We suggest that timing preferences are stable across employed timing strategies and rhythmic content in common drum set patterns.


B.23 The Effects of Altered Auditory Feedback on Speech and Music Production
Tim A. Pruitt (1)*, Peter Pfordresher (1)
(1) University at Buffalo, The State University of New York, Buffalo, New York, United States of America
* = Corresponding author, [email protected]

Productions of musical sequences depend on the match between planned actions and auditory feedback. Past research has revealed distinct effects of different altered auditory feedback (AAF) manipulations. When AAF leads to asynchronies between perception and action, timing of production is disrupted but accuracy of sequencing is not. Conversely, AAF manipulations of contents (pitch) disrupt sequencing but not timing. Previous research by Pfordresher and Mantell (submitted) has demonstrated that such distinct disruptive effects of AAF in manual production (keyboard) are qualitatively similar in vocal production (singing) of musical sequences. The current research further examines whether similar effects are found for the production of speech, for which syllables rather than pitches constitute event categories. On different trials, participants either sang melodies or spoke sequences of nonsense syllables at a prescribed production rate of 600-millisecond inter-onset intervals (IOIs) while experiencing AAF manipulations of feedback synchrony or feedback contents (alterations of pitch or of syllabic content). We constructed novel speech sequences that were structurally isomorphic to previously used melodies by matching each pitch class in a melody to a unique consonant-vowel (CV) nonsense word. Preliminary results suggest that the dissociation between sequencing and timing generalizes across both the speech and musical production domains. The trend in mean IOIs across feedback conditions demonstrates the largest slowing of production during the asynchronous feedback condition. Likewise, the expected trend emerged with respect to error rates, as participants were least accurate under the content-shifted feedback condition. These qualitatively similar effects suggest that action and perception associations are guided by abstract representations that may be similarly organized across music and language domains.

B.24 Does Note Spacing Play Any Role in Music Reading?
Bruno H. Repp (1)*, Keturah Bixby (2), Evan Zhao (3)
(1) Haskins Laboratories, New Haven, CT, (2) University of Rochester, Rochester, NY, (3) Yale University, New Haven, CT
* = Corresponding author, [email protected]

In standard notation, long notes occupy more space than do short notes. Violations of this rule look odd and may cause performance errors. Musically illiterate persons believe note spacing conveys tempo (Tan et al., 2009). Can note spacing affect musical behavior? In Experiment 1, we investigated whether global note spacing influences tempo choice. Skilled pianists were asked to sight-read 20 unfamiliar excerpts (with tempo and expression marks deleted) and find the most appropriate tempo for each. The excerpts were printed with either wide or narrow note spacing, in a counterbalanced design. The pianists tended to play slower when the notation was widely spaced, but this effect was only marginally significant. In Experiment 2, we tested whether a local change in note spacing can influence perceptual judgments. Musicians judged whether a rhythm they heard matched notation they saw. The notation either did or did not contain a leftward-shifted note, and the preceding note in the rhythm either was or was not shortened. Our hypothesis was that the mismatch due to the shortened note might be harder to detect in the presence of a shifted note. However, no significant effect emerged. In Experiment 3, musicians again judged whether an auditory rhythm matched notation but were required to base their judgment on note spacing while ignoring note symbols. Symbols were either congruent with spacing, incongruent (reversed in order), or absent (stems only). The rhythms either did or did not contain a reversed pair of notes. Participants were able to use note spacing when required to do so but unable to ignore note symbols entirely: both match and mismatch judgments were more accurate with congruent than with incongruent symbols. In conclusion, while incidental effects of note spacing in musical tasks remain elusive, it is clear that the temporal information conveyed by note spacing is readily accessible.


B.25 Bayesian modelling of time interval perception
Ken-ichi Sawai (1)*, Yoshiyuki Sato (2), Kazuyuki Aihara (3, 1)
(1) Graduate School of Information Science and Technology, University of Tokyo, Tokyo, Japan, (2) Graduate School of Information Systems, University of Electro-Communications, Tokyo, Japan, (3) Institute of Industrial Science, University of Tokyo, Tokyo, Japan
* = Corresponding author, [email protected]

Perception of the time interval between sounds is important to music perception. In particular, hearing three successive sounds with short time intervals is one of the most fundamental situations of temporal pattern perception. As for hearing three rapid sounds, it has been known that our brain sometimes misestimates the latter interval depending on the relative length of the two intervals. Concretely, underestimation of the second interval occurs if the first interval is a little shorter or much longer than the second, and overestimation of the second occurs if the first is a little longer or much shorter. However, no model has comprehensively succeeded in explaining the misperception of auditory time intervals. We propose a model of auditory temporal pattern perception using the Bayesian inference framework. Our Bayesian model assumes that our neural system cannot observe true time intervals, but only intervals including noise. Bayesian inference enables the observer to effectively infer the true time intervals by combining an observation with prior knowledge. We formulate the prior knowledge for temporal patterns as follows, assuming that the observer solves a source identification problem. First, our brain infers from the three successive sounds whether or not each pair of two neighboring sounds comes from the same source. Next, our brain assumes that the sounds from the same source are temporally close and isochronous. Then, we combine the prior knowledge into one prior distribution. We conducted a numerical simulation and showed that our model can qualitatively replicate the misperception of auditory time intervals. This result suggests that our brain makes a Bayesian inference to estimate time intervals, and that source identification plays an important role in the inference.

B.26 Linguistic Influences on Rhythmic Preference in the Music of Bartok
Andrew Snow, Heather Chan*
Eastman School of Music, Rochester, New York, USA
* = Corresponding author, heather.y.chan@gmail.com

This study examines the relationship of rhythms between Hungarian folk songs and compositions by Bela Bartok. Bartok made extensive tours of Hungary and surrounding regions during which he copiously transcribed the folk songs he encountered. His purpose was to collect these melodies into a bank that he could draw upon to write in a more authentic Hungarian style. The motivation of this study is the observation that folk melodies are likely to be intentionally based on rhythms existing in the associated language, since they were likely originally sung with texts, while the absence of direct quotation may remove that rhythmic bias from other Bartok compositions. By using the normalized pairwise variability index (nPVI) previously used by Ani Patel (2006), we have quantitatively compared rhythm in the primary themes of folk songs and Bartok's original compositions and found that no statistically significant difference exists between the two groups. Perhaps this could be taken as evidence that Bartok successfully integrated a Hungarian "sound" even in works that do not use original folk melodies. On the other hand, the large variability of nPVI among the themes in both categories suggests that nPVI may not be correlated with the native language of composers and may be too unstable for this analytical application, which could have implications for other studies using nPVI to study rhythm. Our repertoire includes the Romanian Folk Dances, the Violin Rhapsodies, the Forty-Four Violin Duos, the String Quartets, Concerto for Orchestra, and Concerto for Viola. Themes were extracted with the aid of Vera Lampert's source catalog of folk music in Bartok's compositions (2008).
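Two editorial sketches for the preceding abstracts. First, the inference scheme described in B.25 can be illustrated with a toy grid computation; this is not the authors' model: the Gaussian observation noise, the mixture prior favoring isochrony for same-source sounds, and every parameter value below are assumptions chosen only to show the mechanism.

    import numpy as np

    def estimate_second_interval(o1, o2, sigma=30.0, p_same=0.5, tau=20.0):
        """Posterior-mean estimate (ms) of the second interval t2, given noisy
        observations o1, o2 of two intervals. Mixture prior: with probability
        p_same the sounds share a source and t2 is near-isochronous with t1;
        otherwise the prior is flat over the grid (weights are heuristic)."""
        grid = np.arange(100.0, 1000.0, 2.0)            # candidate intervals
        t1, t2 = np.meshgrid(grid, grid, indexing="ij")
        likelihood = (np.exp(-0.5 * ((o1 - t1) / sigma) ** 2) *
                      np.exp(-0.5 * ((o2 - t2) / sigma) ** 2))
        prior = (p_same * np.exp(-0.5 * ((t2 - t1) / tau) ** 2) +
                 (1.0 - p_same) / grid.size)
        post = likelihood * prior
        post /= post.sum()
        return (post.sum(axis=0) * grid).sum()          # marginal mean of t2

    # estimate_second_interval(300, 350) comes out below 350 ms: the second
    # interval is underestimated when the first is a little shorter, in line
    # with the bias B.25 describes.

Second, the nPVI used in B.26 is, for successive durations d1, ..., dm, 100/(m-1) times the sum over adjacent pairs of |dk - dk+1| divided by the pair mean (dk + dk+1)/2; a direct transcription:

    def npvi(durations):
        """Normalized pairwise variability index of a duration sequence."""
        pairs = zip(durations[:-1], durations[1:])
        return 100.0 / (len(durations) - 1) * sum(
            abs(a - b) / ((a + b) / 2.0) for a, b in pairs)

    # npvi([1, 1, 1, 1]) == 0 for a perfectly even rhythm, while strongly
    # contrasting durations such as npvi([1.5, 0.5, 1.5, 0.5]) give 100.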


B.27 Infants prefer singers of familiar songs
Gaye Soley and Elizabeth Spelke*
Department of Psychology, Harvard University, Cambridge, Massachusetts, USA
* = Corresponding author, [email protected]

Infants show preferences for familiar stimuli, including faces, language, and music [1-3]. Native language preferences lead to preferences for speakers of that language [4], suggesting that these preferences may influence infants' social relationships. Given that after exposure young infants can recognize familiar music [5-6], we investigated the role of song familiarity and melodic structure in guiding 5-month-olds' visual preferences for singers, using methods that previously revealed preferences for native-language speakers. Forty-eight infants saw videos of two women, appearing in sequence and singing one of three songs with the same lyrics and rhythm. One song was familiar to all children (according to parental report); the other two were unfamiliar, and either tonal or atonal. Infants first saw a silent baseline, where the two women appeared side by side, smiling at the infant. Then, they saw six familiarization trials, where the women appeared in alternation and sang for 6 trials the song version corresponding to the condition; this was followed by a silent test trial identical to the baseline. The lateral positions of the women, the order of the presentation, and pairings of women to melodies were counterbalanced across infants. We compared the percentages of looking time to singers of familiar songs during baseline and test, respectively, using two-tailed, paired t-tests. Infants preferred singers of the familiar song to singers of both unfamiliar songs (atonal unfamiliar song: p < .01; tonal unfamiliar song: p = .06). In contrast, infants did not prefer singers of the tonal unfamiliar song to singers of the atonal one (p > .5). These results suggest that music modulates 5-month-old infants' visual preferences for singers, and that familiarity with specific songs rather than musical conventions drives these preferences in early infancy. These findings indicate that music-based social preferences observed later in life might originate in infancy.

B.28 Learning to sing a new song: Effects of native English or Chinese language on learning an unfamiliar tonal melody having English or Chinese lyrics
Leah C. Stevenson, Bing-Yi Pan, Jonathan Lane, & Annabel J. Cohen*
AIRS SSHRC MCRI, Department of Psychology, University of Prince Edward Island, Charlottetown, PE, Canada C1A 4P3
* = Corresponding author, Acohen@upei.ca

What is the role of language familiarity in learning to sing a new song? In a previous study, the AIRS Short Battery of Tests of Singing Skills was administered to native Chinese and native English speakers attending a Canadian university [McIver, A. J., Lamarche, A. M-J., & Cohen, A. J., "Non-native acquisition of lyrics and melody of an unfamiliar song," 2010 CSBBCS Annual Meeting, abstract, Can. J. of Expt. Psychology, 64, 296-297 (2010)]. Effects of native language arose in the test component entailing learning a new song. The structure of the 23-note tonal melody of the song paralleled the phrase and syllabic structure of the lyrics. Lyrics were in English. Native English speakers sang both melody and lyrics more accurately than native Chinese. The present new study examines performance of the two language groups as before, but, in the task requiring learning the new song, half of the participants of each language group receives Chinese lyrics instead of English. The aim is to show that the deficit to melody acquisition arising from non-native lyrics, as found for native Chinese speakers, generalizes to native English speakers. However, because the native English speakers know less of the Chinese language than native Chinese students know of English, a larger deficit for native English speakers is expected. The pattern of the results may reveal the separate influences of general cognitive load versus demand on a syntactic integrative resource shared by melody and lyrics (SSIRH) [Patel, A., "Music, language and the brain". NY: Oxford (2008)]. The design entails 32 participants, 16 native English speaking and 16 Chinese. Other components of the test battery (e.g., reproducing musical elements, singing a familiar song, creating both song and story, completing a song) provide additional context. The results may also apply to teaching songs in multicultural settings. [Supported by SSHRC]


B.29 Exploring Real-time Adjustments to Changes in Acoustic Conditions in Artistic Piano Performance

Victoria Tzotzkova (1)*
(1) Computer Music Center, Columbia University, USA
*= Corresponding author, [email protected]

In an essay titled "Coping with pianos," Alfred Brendel assures us that "anyone who has ever traveled with a piano knows that the same instrument not only sounds different in different halls, it even seems to feel different in its mechanism…" Even more strikingly, this difference in the feel of the instrument manifests itself in the same space and on the same day, between the afternoon rehearsal and the evening performance. On Brendel's account, the acoustic difference made by the presence of an audience figures into the performance experience of the pianist in significant ways, affecting even the experience of an intimately familiar instrument. The present research focuses on the role of listening in acts of performance, aiming to open to investigation the ways that pianists may adjust their actions in performance in order to obtain a desired sort of sound under particular acoustic circumstances. It further aims to complicate the idea of timbre in piano performance, moving towards a conception of timbre as a range of possibilities available to the pianist. The project focuses in turn on the timbral variability of piano sonorities, comparing timbral profiles of corresponding chords in Morton Feldman's Last Pieces performed under identical studio conditions, and on interview responses of participating performers asked to perform an excerpt from Last Pieces three times under subtly enhanced studio acoustic conditions. The project is part of a larger research program on the timbral (coloristic) aspects of classical piano performance. Such aspects of performance are considered in line with Augoyard and Torgue's definition of "sonic effects" as phenomena which incorporate both "physical and human dimensions of sound" (Augoyard and Torgue 2006). The coupled approach of this project aims to contribute to understanding the ways in which the physical and human dimensions of sound interact in experiences of artistic music performance.

B.30 The role of continuous motion in audio-visual integration

Jonathan Vaisberg*, Michael Schutz
McMaster Institute for Music and the Mind, McMaster University, Hamilton, Canada
*= Corresponding author, vaisbejm@mcmaster.ca

The present study explores the role of motion in audio-visual integration, with the goal of better understanding the ways in which visual information shapes the musical experience. Much research on audiovisual interactions relies on visual stimuli that are either static or exhibit apparent motion (rather than continuous motion). However, when watching a musical performance, we generally see continuous motion. We are therefore investigating the degree to which movement in visual stimuli affects the ways in which they alter our perception of concurrent auditory information. Previous work has shown that a single moving dot mimicking an impact gesture is capable of altering the perceived duration of percussive sounds (Schutz & Kubovy, 2009). In the context of this paradigm, longer gestures induced participants to perceive longer tone durations. This study builds on that work by directly comparing the visual influence of two classes of dots: dynamic (based on the movements of a percussionist striking an instrument) and static (single dots turning on and off without moving). In this experiment, participants are asked to rate the perceived duration of a series of sounds paired with long and short versions of each of these two classes of visual stimuli, to explore their relative influence on the perception of tone duration. We are currently collecting data, and anticipate that the results will shed light on the degree to which motion plays a role in audio-visual integration. We believe the outcome of this study will be useful in applying the vast literature on audio-visual integration to questions regarding the role of visual information in music perception.


B.31 The Effect of Rhythmic Distortion on Melody Recognition

David Weigl (1)*, Catherine Guastavino (1), Daniel J. Levitin (2,3)
(1) School of Information Studies, McGill University, Montréal, Québec, Canada, (2) Department of Psychology, McGill University, Montréal, Québec, Canada, (3) Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), McGill University, Montréal, Québec, Canada
*= Corresponding author, david.weigl@mail.mcgill.ca

The role of rhythm remains underexplored in research on melody recognition. This study investigated the cue validity of rhythmic information for melody recognition by asking participants to identify melodies whose rhythm was distorted while pitch information was left intact. The stimuli consisted of excerpts from 46 melodies selected in previous research: 422 undergraduate psychology students were asked to list melodies they would find immediately recognizable, and the melodies with the highest inter-subject agreement were selected. The set included nursery rhymes and popular songs. In this study, 50 psychology undergraduates listened to MIDI files representing melodies in three rhythmic conditions: shuffled, where all note durations were randomly reassigned among the notes of the melody; randomized, where all notes were assigned a random duration in MIDI ticks, constrained by the shortest and longest durations present in each melody; and undistorted, the control case. A similarity metric based on chronotonic distance was used to quantify the degree of rhythmic distortion in the manipulated conditions (a minimal sketch of these manipulations appears after B.32 below). This research is currently underway; we anticipate completing data collection and analysis well before the conference date. We hypothesize a highly significant drop in recognition under rhythmic distortion; furthermore, we expect the drop in recognition to correlate with the chronotonic distance between the original and distorted versions. This study contributes to the ongoing discussion of the role of individual musical facets in melody recognition. The results inform a larger project exploring the effects of temporal, structural, pitch-related, and polyphonic musical facets, and their interactions, on the recognition of familiar tunes. By systematically manipulating these facets, we hope to discover minimal information requirements for melody recognition.

B.32 Perception of entrainment in apes (Pan paniscus)

Philip Wingfield (1)*, Patricia Gray (2)
(1) University of North Carolina-Greensboro, Greensboro, NC, USA, (2) University of North Carolina-Greensboro, Greensboro, NC, USA
*= Corresponding author, ptwingfi@uncg.edu

We hypothesize that basic elements of musicality can be manifest in bonobo communication, given the right context in which cultures develop. Several considerations motivate this hypothesis: genetic evidence indicates that apes and humans are closely related species (98.797% of human DNA); similar neural structures and patterns of neural asymmetry in humans and non-human apes suggest that the organization of the cortical areas critical to the temporal organization underlying music-making and language abilities in humans is physically available for the expression of these behaviors in non-human apes; bonobos and humans share a common ancestor (~6 million years ago) before taking separate evolutionary paths; bonobos share many biological and cultural elements with humans; and bonobos have complex cultural processes. This research explores the integrative auditory and cognitive capacities of bonobo apes (Pan paniscus) and probes possible relationships to the evolution of music in humans. Looking at data collected between highly enculturated bonobos and English-speaking humans, we analyzed the timing of exchanges, the rhythmicity of turn-taking, and the rhythmicity of conversational English with ape interjections ('peeps') as integrated sequences. Research results are in progress but suggest that the temporal correlations of bonobo vocalized interjections with English speakers are statistically significant. The research project bears relevance to: 1) language and cognition in apes and humans; 2) the perception of auditory communication via speech prosody, non-speech vocalizations, and music; and 3) the evolution of temporally moderated interactive behaviors. The specific objectives of the research are to analyze data of bonobo/human interactions based on vocalizations/speech and to determine whether bonobos make temporally correlated vocalizations to English-speaking humans.
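For concreteness, the two rhythmic distortions and the chronotonic comparison described in B.31 can be stated precisely in a few lines of code. The Python below is a minimal illustrative sketch, not the authors' implementation: it assumes a melody's rhythm is given as a list of note durations in MIDI ticks, and the function names and the uniform random sampling are assumptions made for the example.

    import random

    def shuffle_durations(durations):
        # Shuffled condition: the melody's own note durations are
        # randomly reassigned among its notes (pitches stay in place).
        shuffled = list(durations)
        random.shuffle(shuffled)
        return shuffled

    def randomize_durations(durations):
        # Randomized condition: each note receives a random duration in
        # MIDI ticks, constrained by the shortest and longest durations
        # present in the melody.
        lo, hi = min(durations), max(durations)
        return [random.randint(lo, hi) for _ in durations]

    def chronotonic_distance(a, b, step=10):
        # Chronotonic distance: represent each rhythm as a step function
        # whose height at time t is the duration of the note sounding at
        # t, then sum the absolute differences between the two curves,
        # sampled every `step` ticks.
        def chain(durations):
            samples = []
            for d in durations:
                samples.extend([d] * max(1, d // step))
            return samples
        ca, cb = chain(a), chain(b)
        n = max(len(ca), len(cb))   # pad the shorter chain with silence
        ca += [0] * (n - len(ca))   # so both curves span the same
        cb += [0] * (n - len(cb))   # stretch of time
        return step * sum(abs(x - y) for x, y in zip(ca, cb))

    # Example: a short rhythm (in ticks) and its two distortions.
    original = [480, 240, 240, 480, 960]
    print(chronotonic_distance(original, shuffle_durations(original)))
    print(chronotonic_distance(original, randomize_durations(original)))

One design point the sketch makes visible: the shuffled condition preserves the melody's total duration, while the randomized condition generally does not; the zero-padding in the distance computation accommodates the resulting difference in overall length.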


B.33 Transfer Effects in the Vocal Imitation of Speech and Song

Matthew G. Wisniewski (1)*, James T. Mantell (1), Peter Q. Pfordresher (1)
(1) University at Buffalo, The State University of New York, Buffalo, USA
*= Corresponding author, [email protected]

Imitating a stimulus, whether it is speech or song, requires a participant to perceive the sound and translate it into vocal production. This suggests that similar mechanisms are used to imitate stimuli from the speech and song domains. There is evidence, however, for independent imitation mechanisms. For instance, some people (amusics) have processing impairments specific to stimuli from the song domain (Peretz & Coltheart, 2003). In this study we asked: how does practice imitating a stimulus from one domain affect the ability to imitate a stimulus from the other? In addition, we investigated how changes to stimulus contour and word information affected performance on non-practiced stimuli. Participants practiced imitating speech or song stimuli and were then transferred to stimuli that were the same or different in domain, contour, or text. Analyses of performance on transfer trials show that switching contour benefits imitation accuracy if domain is also switched, but not if the domain in the transfer trials is the same as in practice. The results suggest that the mechanisms for imitating speech and song are not independent and that contour is not processed independently of stimulus domain.

B.34 The single voice in the choral voice: How the singers in a choir cooperate musically

Sverker Zadig
Örebro University, Örebro, Sweden
*= Corresponding author, sverker.zadig@telia.com

As a choral conductor and also as a choral singer, I have experience of formal and informal leaders within a choral voice, and I became interested in studying what really goes on between the singers. This paper describes what can happen between the singers in a choral voice and how the individuals in a choir differ in taking initiative and acting in leading roles. I have conducted qualitative interview studies with conductors and singers, as well as recording studies of the individuals in a choral voice. The recordings were made in a Swedish gymnasium school. With close-up microphones on headsets and multitrack recording, it has been possible, using analysis software, to view graphically exactly how each individual sings and to compare the singers with one another. The recording sessions took place during consecutive choir rehearsals, with simultaneous video recording so as to capture any visual signals between the singers. Analysis was done by aligning printouts of the same musical sequence from the recordings of all singers in the same choral voice. It is possible to view graphically the differences in attacks and intonation, and to notice when someone is ahead and "pulling" others to follow. This leading role can be both positive and negative: a confident but less skilled singer can unfortunately lead other singers to take wrong steps in the music. My vision is to find, improve, and develop positive leadership from good singers to the rest of the choir, and also to determine whether and how seating in the choir can affect the singing.


AUTHOR INDEX

Acevedo, S. B1
Adachi, M. 17, 18
Aguila, E. A1
Aihara, K. B25
Albin, A. 65
Albrecht, J. 94
Almonte, F. 75
Ammirante, P. 68
Anderson, B. 86
Anderson, M. B18
Ashley, R. 83, 86
Athanasopoulos, G. 44
Aube, D. A2
Aufegger, L. 29
Aziz, A. 100
Bailes, F. B5
Baldwin, D. B19
Balkwill, L. 4
Bartlette, C. B20
Bass, A. A13
Battenberg, E. A15
Begosh, K. 96
Belyk, M. B2
Benadon, F. 28
Beckett, C. 46
Berger, D. A2
Bergeson, T. 33
Bharucha, J. 105
Bhattacharya, J. 14
Bigand, E. 50
Birchfield, J. B4
Bisesi, E. B3
Bishop, L. B5
Bixby, K. B6, B24
Borchert, E. A3
Bradley, E. B7
Brandon, M. 19
Brehm, L. A13
Brooks, C. 40
Brown, J. 70
Brown, R. 97
Brown, S. 8, 35, 36, 107, B2
Broze, Y. A4, B22
Butler, B. 51, A5
Campbell, M. A6, A28
Caplin, W. 69
Carey, S. 57
Cariani, P. 34
Chaffin, R. 96
Chan, H. B26
Chan, L. 5, A32
Chanda, M. A7
Chen, C. A22
Chen, C. A22
Chiasson, F. 22
Chilvers, A. A35
Chordia, P. 65, 80, 101
Chow, I. 35, 36
Choy, T. A8
de Clercq, T. 74
Cogsdill, E. 55, 67
Cohen, A. B28
Connolly, J. A8
Corpuz, V. 5
Corrigall, K. 20
Cox, K. 91
Craton, L. A9
Creel, S. 84
Crellin, J. A17
Cristiano, J. B8
Croley, B. A2
Cunha, J. B8
Curtis, M. 105
Davis, M. A10
Dean, R. B5
Demos, A. 96, B9
Devaney, J. 99
Dilley, L. 37
Domanico, L. A9
Dong, Q. A2
Donnelly-Landolt, W. A9
Dowling, W. A30
Duane, B. 86, A11
Dumas, R. A12
Edelman, L. A13
Egermann, H. 92
Einarson, K. B10
Eitan, Z. 27
Farbood, M. 85
Federman, J. B11
Fedorenko, E. 49
Filippa, M. 21
Frank, T. B9
Franěk, M. A14
Friendly, R. B12
Frith, S. 44
Fujinaga, I. 99
Gazit, O. A15
Georgopoulos, A. A12
Gerken, L. B19
Gingras, B. 14, A16
Godøy, R. B15
Gold, J. A39
Good, A. 90
Goodchild, M. A17
Goodwin, M. A2
Gosselin, N. A33
Granot, R. 27
Gratier, M. 21
Grauer, V. 8
Gray, P. B32
Gross, A. 98
Groux, S. 93
Guarna, K. A4
Gudmundsdottir, H. B13
Guastavino, C. B31
Halpern, A. 48
Handelman, E. 102, A18
Hannon, E. 82
Hasegawa, R. A19
He, Q. A21
Hedger, S. 15
Hegde, S. 43, 105, A20
Helm, P. A13
Henry, M. 30, 54
Herholz, S. 48
Herrington, L. A21
Hoeckner, B. 15
Hoeschele, M. 4
Horn, K. 79
Hou, J. A21
Houlihan, K. 60
Huang, J. 16
Huron, D. 40, 79, 94
Hyman, L. A37
Iversen, J. 19, 52
Jiang, C. 88
Johnson, R. A24
Kalender, B. 45
Kanwisher, N. 49
Katz, M. A13
Kendall, R. 24
Kim, J. B14
Kim, J.C. A27
Kim, J.N. A26
Kim, S. A25
Klyn, N. A6, A28
Koreimann, S. 59
Kozak, M. B15
Kraynak, T. A29
Kruger, J. 104
Kruger, M. 104
Krumhansl, C. 16
Ladinig, O. 40
Lagarrigue, C. 22
Lagerstrom, E. A24
Lane, J. A2, B28
Large, E. 75
Larsen, J. 9, 12
Layman, S. A30
Lee, S. 65
Lembke, S. 25
Leuthold, A. A12
Levitin, D. 1, 39, 41, 60, 62, A7, B31
Lin, M. 8
Lisboa, T. 96
Liu, F. 88
Liu, Y. 101
Livingstone, S. 13
Logan, T. B9
London, J. 55, 67
Lopez, R. A30
Loui, P. 2
Luebke, A. 31
Mailman, J. B16
Manning, F. 53
Mantell, J. 107, B17, B33
Marcum, C. 87
Margulis, E. 77
Marie, C. A21
Marin, M. 14, 81
Martens, P. 9, 12
Marvin, E. 3, B6
Mathias, B. B18
Matsumoto, A. 87
Mavromatis, P. 76, 85, A34
McAdams, S. 22, 23, 25, 42, 54, 69, 72, 92, A17
McAuley, J. 30, A31
McDermott, J. 49
McDonough, J. B6
McGarry, L. 5, 10, A32, A43
McLean, J. 104
Mendoza, J. B19
Milko, J. 35
Miller, M. 72
Mitchell, L. 39, 41
Mogil, J. A14
Moore, J. B20
Moran, N. 44
Morgan, J. A8
Morin, A. A33
Morrow, S. 94
Moscicki, M. 4
Moussard, A. 50
Muhlenhaupt, E. A9
Narmour, E. 61, 64
Nave, K. 54
Neill, W. A39
Newport, E. 3
Norman-Haignere, S. 49
Nusbaum, H. 15
Nymoen, K. B15
Ouyang, Y. B21
Oxenham, A. A3
Paisley, A. 39, 41
Palmer, C. 13, 97, B18
Pan, B. B28
Panikar, A. 43
Parncutt, R. 6, 71, B3
Patel, A. 19, 52
Paul, B. A4, B22
Paul, G. 26
Pearce, M. 92
Peretz, I. 50
Perlovsky, L. 7
Peterson, N. 33
Petružálek, J. A14
Pfordresher, P. 107, B1, B17, B18, B23, B33
Plantinga, J. 18
Plazak, J. 78, B22
Poirier, C. A9
Poon, M. 36, 106
Port, R. 38
Poudrier, E. 56
Pruitt, T. B23
Quam, R. 82
Quinn, I. A34
Quinto, L. A35
Rajarajan, P. 54
Ramanujam, B. 43
Randall, R. A1
Rao, S. A20
Rendall, D. B12
Repp, B. 56, B24
Ricketts, T. B11
Riggle, M. 66, A36
Rosenthal, M. 82
Russo, F. 5, 10, 47, 90, 95, A32, A43
Rutkowski, T. A25
Rzeszutek, T. 8
Sachs, M. 2
Saffran, J. 19
Saindon, M. 32
Samplaski, A. 103
Sandstrom, G. 95
Sapp, C. 71
Sastry, A. 80
Sato, Y. B25
Savage, P. 8
Sawai, K. B25
Schachner, A. 57
Schafheimer, B. B17
Schellenberg, E. 32, 45
Schlaug, G. 2
Schmuckler, M. 58
Schramm, J. 31
Schubert, P. 99
Schulkind, M. A37
Schutz, M. 26, 53, 106, A38, B14, B30
Sears, D. 69
Seror, G. A39
Sharma, V. 1
Sigler, A. 102
Smey, D. A40
Smith, B. 22
Snow, A. B26
Soley, G. B27
Spelke, E. B27
Stevenson, L. B28
Stewart, L. 81, 88
Sturdy, C. 4
Sullivan, J. 90
Sun, S. 101
Taher, C. A41
Tan, D. 11, 63
Tardieu, D. 23
Temperley, D. 11, 73, 74, 89, B1
Temperley, N. 89
Thompson, L. A42
Thompson, W. 13, 68, 81, 88, A35
Tilton, A. 107
Tirovolas, A. 62
Trainor, L. 20, 51, A5, A21, B10, B12
Traube, C. 20, A33
Trehub, S. 18, 32, 45
Trejaut, J. 8
Trenck, M. 9
Tzotzkova, V. B29
Upham, F. 42
Vaisberg, J. A38, B30
Vassilakis, P. 24
Vempala, N. 47, A43
Verschure, P. 93
Vitouch, O. 29, 59
Vuvan, D. 58
Wanderley, M. 13
Wang, Y. 8
Wedd, A. 30
Weigl, D. B31
Weishaar, K. 35
Weisman, R. 4
Wessel, D. A15
Wieland, L. A31
Wiggins, G. 92
Wild, J. 72, 99, A17
Wingfield, P. B32
Winkler, M. 28
Wisniewski, M. B33
Wolpow, M. A13
Xu, Y. 88
Xue, G. A21
Yang, Y. 88
Yankeelov, M. A42
Zadig, S. B34
Zamm, A. 2
Zatorre, R. 48
Zhao, E. B24
Zhou, L. 18


Map of Eastman Campus (includes locations for talks and events)