
Hindawi Publishing Corporation
International Journal of Vehicular Technology
Volume 2013, Article ID 924170, 14 pages
http://dx.doi.org/10.1155/2013/924170

Review Article

Development and Evaluation of Automotive Speech Interfaces: Useful Information from the Human Factors and the Related Literature

    Victor Ei-Wen Lo1,2 and Paul A. Green1

1 Driver Interface Group, University of Michigan Transportation Research Institute, Baxter Road, Ann Arbor, MI, USA
2 Department of Environmental Health Sciences, School of Public Health, University of Michigan, Washington Heights, Ann Arbor, MI, USA

    Correspondence should be addressed to Victor Ei-Wen Lo; [email protected]

Received October; Revised January; Accepted January

    Academic Editor: Motoyuki Akamatsu

Copyright © V. E.-W. Lo and P. A. Green. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Drivers often use infotainment systems in motor vehicles, such as systems for navigation, music, and phones. However, operating visual-manual interfaces for these systems can distract drivers. Speech interfaces may be less distracting. To help with designing easy-to-use speech interfaces, this paper identifies key speech interfaces (e.g., CHAT, Linguatronic, SYNC, Siri, and Google Voice), their features, and what was learned from evaluating them and other systems. Also included is information on key technical standards (e.g., ISO standards, ITU-T P.800) and relevant design guidelines. This paper also describes relevant design and evaluation methods (e.g., Wizard of Oz) and how to make driving studies replicable (e.g., by referencing SAE J2944). Throughout the paper, there is discussion of linguistic terms (e.g., turn-taking) and principles (e.g., Grice's Conversational Maxims) that provide a basis for describing user-device interactions and errors in evaluations.

    1. Introduction

In recent years, automotive and consumer-product manufacturers have incorporated speech interfaces into their products. Published data on the number of vehicles sold with speech interfaces is not readily available, though the numbers appear to be substantial. Speech interfaces are of interest because visual-manual alternatives are distracting, causing drivers to look away from the road and increasing crash risk. Stutts et al. [] reported that adjusting and controlling entertainment systems and climate-control systems and using cell phones accounted for % of all crashes related to distraction. The fact that entertainment-system use ranks second among the major causes of these crashes supports the argument that speech interfaces should be used for music selection. Tsimhoni et al. [] reported that % less time was needed for drivers to enter an address using a speech interface as opposed to using a keyboard, indicating that a speech interface is preferred for that task. However, using a speech interface still requires cognitive demand, which can interfere with the primary driving task. For example, Lee et al. [] showed that drivers' reaction time increased by ms when using a complex speech-controlled email system (three levels of menus with four-to-seven options for each menu) in comparison with a simpler alternative (three levels of menus with two options per menu).

Given these advantages, suppliers and auto manufacturers have put significant effort into developing speech interfaces for cars. They still have a long way to go. The influential automotive.com website notes the following []:

In the . . ., the biggest issue found in today's vehicles are the audio, infotainment, and navigation systems' lack of being able to recognize voice commands. This issue was the source of more problems than engine or transmission issues. . . . Over the four years that the survey questions people on voice recognition systems, problems have skyrocketed percent.


Consumer Reports [] said the following:

I was feeling pretty good when I spotted that little Microsoft badge on the center console. Now I would be able to access all of those cool SYNC features, right? Wrong.

When I tried to activate text to speech, I was greeted with a dreadful "Not Supported" display. I racked my brain. Did I do something wrong? After all, my phone was equipped seemingly with every feature known to man. . . . But most importantly, it was powered by Microsoft just like the SYNC system on this Mustang.

Needing guidance, I went to Ford's SYNC website. . . . I was able to download a -page PDF document that listed supported phones. (There is an interactive Sync compatibility guide here, as well.) While I had naively assumed that my high-tech Microsoft phone would work with all the features of SYNC powered by Microsoft, the document verified that this was not the case. . . . Text to speech would only work with a small handful of dumbphones that aren't very popular anymore. Anyone remember the Motorola Razr? That phone was pretty cool a couple of years ago.

One consumer, commenting about the Chrysler UConnect system, said the following []:

I have a problem with Uconnect telephone. I input my voice tags but when I then say "Call Mary" the system either defaults to my phone book folder or I get names on the screen and am asked to select a line. I should just say "call Mary home" then I should here my voice with "calling Mary home is that correct". Can you assist?

Thus, it is appropriate to ask what is known now about the design and evaluation of speech interfaces for cars and how they can be improved. Most engineered systems rely on models, equations, and data to predict system performance and evaluate system alternatives early in development. Such models do not exist for speech interfaces. Thus, for speech interfaces, the emphasis has been on usability testing, often conducted when development is nearly complete and changes are costly to make.

To be more specific, this paper summarizes the state of the art relating to speech interface design in general, as well as a particular simulation model, namely, one to predict user performance when interacting with a speech interface for destination entry and music selection. The model to be developed will allow for exploration of multiple alternative architectures, recognition rates, and command sets, matters that are very expensive to explore experimentally.

The questions addressed in this paper are as follows.

(1) What are some examples of automotive speech interfaces?

(2) Who uses speech interfaces, for what, and how often?

(3) What are the key research results of the user performance using speech interfaces compared with the user performance using visual-manual interfaces?

(4) How should speech interfaces be designed? What are the key design standards and references, design principles, and results from research?

(5) How should speech interfaces be assessed and what should be measured?

To answer these questions, both the human factors literature and the linguistics terms and principles are summarized.

2. What Are Some Examples of Automotive Speech Interfaces?

In the USA, current speech interfaces include Ford SYNC, Chrysler UConnect, GM MyLink, Hyundai Genesis, and Toyota navigation with Entune. The commonly supported applications are navigation (e.g., destination entry, route guidance, and traffic information) and music selection (selecting, playing, and pausing songs on MP3 players and AM/FM/XM radios), as well as those related to cellular phones (answering and placing calls, searching contact lists, and various tasks associated with text messages).

To help the reader understand these applications, some examples follow. The extent to which the user and machine are verbose varies between examples.

Navigation (M: machine; U: user)

M: Command please.

U: Find address.

M: Find address. The city please. Default state is Michigan.

U: Michigan City.

M: Please select the respective line or start spelling.

U: M I C H I G A N.

M: Spell the name in blocks of letters or enter your destination again with "change state" or "enter city".

U: Change state.

M: Please enter the state name.

U: Indiana.

M: Please select the respective line or start spelling.

U: Line one.

M: Line one. Indiana. The city please.

U: Michigan City.

M: Michigan City. What is the street?

U: Wabash Street.

M: Wabash Street. And house number, or if you do not know that, please say "show map" or "start guidance".

U: Six zero one.

M: Six zero one. Show map or start guidance.

U: Start guidance.

M: Start guidance.


Table 1: Examples of well-known automotive speech interfaces and the applications they support. (Applications considered: navigation, restaurant finder, music selection, audio + CD, car information, traffic information, cellular phone, text message.)

CHAT [,] (research): navigation, restaurant finder, music selection
CU Move [] (research): navigation, traffic information
Ford Model U [] (research): navigation, music selection, audio + CD, cellular phone
Linguatronic [] (product): navigation, music selection, audio + CD, cellular phone
SENECA [] (research): navigation, audio + CD, cellular phone
SYNC [] (product): navigation, music selection, audio + CD, cellular phone, text message
VICO [] (research): navigation, restaurant finder, car information, traffic information
Volkswagen [] (product): navigation, cellular phone

    : Can you nd me a Chinese restaurant in PaloAlto with good service?

    : Tere are Chinese restaurants in Palo Alto

    that have good service. Here are the rst ew: Lit-tle Garden Restaurant China Delight MandarinGourmet Restaurant. Should I continue?

    : Hmm. . .ell me more about, Garden Restau-rant?

    : Little Garden Restaurantis an expensive ChineseRestaurant. Dress Code is casual, service is goodand rating is three stars. It is on El Camino Realin Palo Alto.

    : Switch to navigation.

    : ake me to little Garden Restaurant, but stop byBosch Page Mill Office.

    . . .

Music Selection

M: Please say a command.

U: USB.

M: USB. Is it correct?

U: Yes.

M: USB. Please say a command.

U: PLAY TRACK Like a Prayer.

M: Play track Like a Prayer.

Phone Call

M: Main menu.

U: Phone.

M: Phone. Please say a command.

U: Call XXX XXX XXXX.

M: Calling XXX XXX XXXX.

Radio

U: AM one thousand six hundred.

[It did not work if the user said "AM Sixteen Hundred" for this interface.]

M: AM one thousand six hundred.
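Command dialogues like these are usually driven by a small finite-state dialogue manager: each state has a prompt and a short list of commands it will accept. The sketch below (hypothetical Python, not any product's code; the state names and commands are invented to mirror the music-selection example) shows the confirm-then-act turn structure:

```python
# Minimal finite-state dialogue manager sketch (hypothetical; the states
# and commands are invented to mirror the music-selection example above).

STATES = {
    "top": {"prompt": "Please say a command.",
            "next": {"usb": "confirm_usb"}},
    "confirm_usb": {"prompt": "USB. Is it correct?",
                    "next": {"yes": "usb_menu", "no": "top"}},
    "usb_menu": {"prompt": "USB. Please say a command.",
                 "next": {}},  # "play track <name>" etc. would hang off here
}

def run_dialog():
    state = "top"
    while STATES[state]["next"]:              # stop when a leaf state is reached
        print("M:", STATES[state]["prompt"])
        heard = input("U: ").strip().lower()  # stands in for the recognizer
        nxt = STATES[state]["next"].get(heard)
        if nxt is None:
            print("M: Pardon?")               # simple rejection handling
        else:
            state = nxt
    print("M:", STATES[state]["prompt"])

if __name__ == "__main__":
    run_dialog()
```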

These systems were developed based on ideas from a number of predecessor systems (Tables 1 and 2). Notice that the core functions were navigation, music selection, and cellular phone support, and that many of them started out as either university or collaborative research projects involving several partners. In several cases, the result was either a product or ideas that later led to products. Of them, SYNC has probably received the most attention.

The CHAT system uses an event-based, message-oriented architecture with core modules for Natural Language Understanding (NLU), Dialogue Management (DM), Content Optimization (CO), Knowledge Management (KM), and Natural Language Generation (NLG). CHAT uses the Nuance speech recognition engine with class-based n-grams and dynamic grammars, and Nuance Vocalizer as the Text-to-Speech engine. Three main applications (navigation, MP3 music player, and restaurant finder) represent important in-vehicle applications [,]. The example for restaurant finder shown earlier is a CHAT dialog.

The CU-Move system is an in-vehicle, naturally spoken dialogue system, which can get real-time navigation and route-planning information []. The dialogue system is based on the MIT Galaxy-II Hub architecture with base components from the CU-Communicator system, which is mixed-initiative and event driven. The system automatically retrieves driving directions through the Internet from a route provider. The dialogue system uses the CMU Sphinx-II speech recognizer for speech recognition and the Phoenix Parser for semantic parsing.

A prototype of a conversation system was implemented on the Ford Model U Concept Vehicle and was first shown in []. This system is used for controlling several noncritical automobile operations using speech recognition and a touch screen. The speech recognizer used in this system was SpeechGo with an adapted acoustic model and other enhancements to improve speech accuracy. The dialogue manager was a multimodal version of ETUDE, described by a recursive transition network. Supported applications were climate control, telephone, navigation, entertainment, and system preferences.

Linguatronic is a speech-based command and control system for telephone, navigation, radio, tape, CD, and other applications. The recognizer used in this device was speaker-independent [].


Table 2: Origins of some well-known automotive speech applications.

CHAT (Conversational Helper for Automotive Tasks): Center for the Study of Language and Information at Stanford University, Research and Technology Center at Bosch, Electronics Research Lab at Volkswagen of America, and Speech Technology and Research Lab at SRI International [,].

CU Move (Colorado University Move): University of Colorado speech group [].

Ford Model U: Ford [].

Linguatronic: DaimlerChrysler Research and Technology in Ulm, Germany, and TEMIC [].

SENECA (Speech control modules for Entertainment, Navigation, and communication Equipment in CArs): EU project involving DaimlerChrysler, TEMIC Research, and the Department of Information Technology, University of Ulm [].

SYNC: Ford in collaboration with Microsoft and Nuance [].

VICO (Virtual Intelligent Co-Driver): European project funded by five different partners: Robert Bosch GmbH, DaimlerChrysler AG, ITC-irst, the University of Southern Denmark, and Phonetic Topographics N.V. [].

Volkswagen: Volkswagen [].

The SENECA SLDS consists of five units: a COMMAND head unit connected via an optical Domestic Digital Bus to the Global System for Mobile Communication module, the CD changer, and the Digital Signal Processing module []. The system provides command-based speech control of entertainment (radio and CD), navigation, and cellular phones. The speech recognition technology of the SENECA SLDS is based on the standard Linguatronic system, using the following methods to match the user's speech: a spell matcher, the Java Speech Grammar Format, voice enrollments (user-trained words), and text enrollments. For dialogue processing, the SENECA SLDS uses a menu-based command-and-control dialogue strategy, including top-down access for main functions and side access for subfunctions.
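As an illustration of such menu-based command and control, the sketch below (assumptions: the menu structure and command names are invented, and a real system would express the constraint as a JSGF grammar inside the recognizer rather than as Python sets) shows how each menu level restricts what may be said:

```python
# Sketch of menu-based command-and-control matching: the active menu level
# defines the only commands the system will accept (a JSGF-style constraint).
# The menus and commands here are invented for illustration.

MAIN_MENU = {"navigation", "phone", "radio", "cd"}       # top-down access
SUB_MENU = {                                             # side access
    "navigation": {"enter city", "change state", "start guidance"},
    "phone":      {"dial number", "call name"},
    "radio":      {"next station", "previous station"},
    "cd":         {"next track", "previous track"},
}

def match(active_commands, utterance):
    """Return the command if the utterance is in the active grammar, else None."""
    u = utterance.strip().lower()
    return u if u in active_commands else None

assert match(MAIN_MENU, "Navigation") == "navigation"
assert match(SUB_MENU["navigation"], "change state") == "change state"
assert match(MAIN_MENU, "play some jazz") is None        # out of grammar: reject
```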

SYNC is a fully integrated, voice-activated in-vehicle communication and entertainment system [] for Ford, Lincoln, and Mercury vehicles in North America. Using commands in multiple languages, such as English, French, or Spanish, drivers can operate navigation, portable digital music players, and Bluetooth-enabled mobile phones. The example for music selection shown earlier is a SYNC dialog.

VICO was a research project that concerned a natural-language dialogue prototype []. As the interface did not exist, researchers used the Wizard of Oz method to collect human-computer interaction data. Here, a human operator, the wizard, simulated system components: speech recognition, natural language understanding, dialogue modeling, and response generation. The goal of this project was to develop a natural language interface allowing drivers to get time, travel (navigation, tourist attraction, and hotel reservation), car, and traffic information safely while driving.

Volkswagen also developed its own in-vehicle speech system []. Detailed information about the architecture and methods used to design the system is not available. Supported applications include navigation and cellular phones.

The best-known nonautomotive natural speech interface is Siri, released by Apple in October 2011. Siri can help users make a phone call, find a business and get directions, schedule reminders and meetings, search the web, and perform other tasks supported by built-in apps on the Apple iPhone 4S and iPhone 5.

Similarly, Google's Voice Actions supports voice search on Android phones (http://www.google.com/mobile/voice-actions/). This application supports sending text messages and email, writing notes, calling businesses and contacts, listening to music, getting directions, viewing a map, viewing websites, and searching webpages. Both Siri and Voice Actions require off-board processing, which is not the case for most in-vehicle speech interfaces.

3. Who Uses Speech Interfaces, for What, and How Often?

Real-world data on the use of speech applications in motor vehicles is extremely limited. One could assume that anyone who drives is a candidate user, but one might speculate that the most technically savvy are the most likely users.

How often these interfaces are used for various tasks is largely unknown. The authors do not know of any published studies on the frequency of use of automotive speech interfaces by average drivers, though they probably exist.

The most relevant information available is a study by Lo et al. [] concerning navigation-system use, which primarily concerned visual-manual interfaces. In this study, ordinary drivers and auto experts (mostly engineers employed by Nissan) completed a survey and allowed the authors to download data from their personal navigation systems. Data was collected regarding the purpose of trips (business was most common) and the drivers' familiarity with the destination. Interestingly, navigation systems were used to drive to familiar destinations. Within these two groups, use of speech interfaces was quite limited, with only two of the ordinary drivers and two of the auto experts using speech interfaces.


Table 3: Speech interface performance statistics from selected bench-top studies. The statistics reported were task completion time (s), task completion rate, number of turns(1), word recognition accuracy(2) or word error rate, and user satisfaction rating(3). The systems and tasks were as follows.

CHAT []: (1) NAV; (2) restaurant finder (RF).
CHAT []: (1) MP3; (2) restaurant finder (RF).
CU Communicator []: phone call for a travel plan.
CU Move []: NAV.
SENECA []: (1) NAV; (2) phone dialing; (3) address book.
VICO []: (1) NAV; (2) current time; (3) tourism; (4) fuel; (5) car manual; (6) hotel reservation; (7) traffic information.
Volkswagen []: (1) NAV; (2) map control; (3) phone.

(1) A turn is defined as one user utterance to the system during a dialog exchange between the user and the system while attempting to perform the task.

(2) Word recognition accuracy (WA) = 100 x (1 - (Ws + Wi + Wd)/W) %, where W is the total number of words in the reference, Ws is the number of reference words which were substituted in the output, Wi is the number of words which were inserted in the output, and Wd is the number of reference words which were deleted in the output.

(3) User satisfaction rating: scale anchored from "strong agreement" to "strong disagreement".


Table 4: Driving performance statistics from selected studies (S: speech; M: manual; K: keyboard). The measures reported were lane keeping, brake reaction time, peripheral detection time, and following distance.

Carter and Graham [] (simulator): S < M; S < M.
Forlines et al. [] (simulator): S < M; no difference.
Garay-Vega et al. [] (simulator): no difference.
Gartner et al. [] (on road): S < M.
Itoh et al. [] (simulator): S < M; no difference.
Maciej and Vollrath [] (simulator): S < M.
McCallum et al. [] (simulator): no difference; no difference.
Minker et al. [] (on road): S < M.
Ranney et al. [] (on road): S < M (. versus . s).
Shutko et al. [] (simulator): S < M; S < M (except incoming call).
Tsimhoni et al. [] (simulator): S < K; S < K ( versus m).
Villing et al. [] (on road).

Table 5: Task performance statistics from selected studies (S: speech; M: manual).

Carter and Graham []: task completion time S > M; speech recognizer rate .%.
Forlines et al. []: task completion time S < M (. versus . s).
Garay-Vega et al. []: task completion time S (dialog-based) > M, S (query-based) < M.
Gartner et al. []: task completion time S > M (simple: . versus . s; complex: . versus . s); speech recognizer rate .% (recognition error rate: .%).
Minker et al. []: task completion time S < M ( versus s); task completion rate S < M ( versus %).
Ranney et al. []: no difference.
Shutko et al. []: task completion time S < M (except dialing phone).
Villing et al. []: task completion time S > M.

The paper also contains considerable detail on the method of address entry (street address being used about half of the time, followed by point of interest (POI)) and other information useful in developing evaluations of navigation systems.

Also relevant is the Winter et al. [] data on typical utterance patterns for speech interfaces, what drivers would naturally say if unconstrained. Included in that paper is information on the number and types of words in utterances, the frequency of specific words, and other information needed to recognize driver utterances for radio tuning, music selection, phone dialing, and POI and street-address entry. Takeda et al. [] present related research on in-vehicle corpora, which may be a useful resource to address who, when, and how often drivers use speech interfaces.

4. What Are the Key Research Results of the User Performance Using Speech Interfaces Compared with the User Performance Using Visual-Manual Interfaces?

There have been a number of studies on this topic. Readers interested in the research should read Baron and Green [] and then read more recent studies.

Using Baron and Green [] as a starting point, studies of the effects of speech interfaces on driving are summarized in four tables. Table 3 summarizes bench-top studies of various in-vehicle speech interfaces. Notice that the values of the statistics varied quite widely between speech interfaces, mainly because the tasks examined were quite different. As an example, for CU Communicator [], the task required the subject to reserve a one-way or round-trip flight within or outside the United States with a phone call. Performing this task involved many turns between user and machine, and the task took several minutes to complete. Within speech interfaces, task-completion time varied from task to task depending on the task complexity [,].

Table 4, which concerns driving performance, shows that the use of speech interfaces as opposed to visual-manual interfaces led to better lane keeping (e.g., lower standard deviation of lane position).

Table 5 shows that task completion times for speech interfaces were sometimes shorter than those for visual-manual interfaces and sometimes longer, even though people speak faster than they can key in responses. This difference is due to the inability of the speech interface to correctly recognize what the driver says, requiring utterances to be repeated. Speech recognition accuracy was an important factor affecting task time and task completion.

5. How Should Speech Interfaces Be Designed? What Are the Key Design Standards and References, Design Principles, and Results from Research?

. . . topic, but what and when are unknown. In addition, various ITU documents that concern speech-quality assessment may be relevant, though they were intended for telephone applications. ITU-T P.800 (methods for subjective determination of transmission quality) and related documents are of particular interest. See http://www.itu.int/rec/T-REC-P/en/.

5.2. Key Books. There are a number of books on speech interface design, with the primary references being Hopper's classic [], Balentine and Morgan [], Cohen et al. [], and Harris []. A more recent reference is Lewis [].

5.3. Key Linguistic Principles. The linguistic literature provides a framework for describing the interaction, the kinds of errors that occur, and how they could be corrected. Four topics are touched upon here.

5.3.1. Turn and Turn-Taking. When can the user speak? When does the user expect the system to speak? Taking a turn refers to an uninterrupted speech sequence. Thus, the back-and-forth dialog between a person and a device is turn-taking, and the number of turns is a key measure of an interface's usability, with fewer turns indicating a better interface. In general, overlapping turns, where both parties speak at the same time, account for less than % of the turns that occur while talking []. The amount of time between turns is quite small, generally less than a few hundred milliseconds. Given the time required to plan an utterance, planning starts before the previous speaker finishes the utterance.

One of the important differences between human-human and human-machine interactions is that humans often provide nonverbal feedback that indicates whether they understand what is said (e.g., head nodding), which facilitates interaction and control of turn-taking. Most speech interfaces do not have the ability to process or provide this type of feedback.

A related point is that most human-human interactions accept interruptions (also known as barge-in), which makes interactions more efficient and alters turn-taking. Many speech interfaces do support barge-in, which requires the user to press the voice-activation button. However, less than % of subjects (unpublished data from the authors) knew about and used this function.

5.3.2. Utterance Types (Speech Acts). Speech acts refer to the kinds of utterances made and their effect []. According to Akmajian et al. [], there are four categories of speech acts.

(i) Utterance acts include uttering sounds, syllables, words, phrases, and sentences from a language, including filler words (umm).

(ii) Illocutionary acts include asking, promising, answering, and reporting. Most of what is said in a typical conversation is this type of act.

(iii) Perlocutionary acts are utterances that produce an effect on the listener, such as inspiration and persuasion.

(iv) Propositional acts are acts in which the speaker refers to or predicts something.

Searle [] classifies speech acts into five categories.

(i) Assertives commit the speaker to the truth of something (suggesting, swearing, and concluding).

(ii) Directives get the listener to do something (asking, ordering, inviting).

(iii) Commissives commit the speaker to some future course of action (promising, planning).

(iv) Expressives express the psychological state of the speaker (thanking, apologizing, welcoming).

(v) Declarations bring about a different state for either speaker or listener (such as "You are fired").

5.3.3. Intent and Common Understanding (Conversational Implicatures and Grounding). Sometimes speakers can communicate more than what is uttered. Grice [] proposed that conversations are governed by the cooperative principle, which means that speakers make conversational contributions at each turn to achieve the purpose or direction of a conversation. He proposed four high-level conversational maxims that may be thought of as usability principles (Table 6).

5.3.4. Which Kinds of Errors Can Occur? Skantze [] provides one of the best-known schemes for classifying errors (Table 7). Notice that Skantze does so from the perspective of a device presenting an utterance and then processing a response from a user.

Veronis [] presents a more detailed error-classification scheme that considers device and user errors, as well as the linguistic level (lexical, syntactic, semantic). Table 8 is an enhanced version of that scheme. Competence, one of the characteristics in his scheme, is the knowledge the user has of his or her language, whereas performance is the actual use of the language in real-life situations []. Competence errors result from the failure to abide by linguistic rules or from a lack of knowledge of those rules (the information from users is not in the database), whereas performance errors are made despite knowledge of the rules (the interface does not hear the user's input correctly).

As an example, a POI category requested by the user that was not in the database would be a semantic competence error. Problems in spelling a word would be a lexical performance error. Inserting an extra word in a sequence ("iPod iPod play . . .") would be a lexical performance error.

A well-designed speech interface should help avoid errors and, when they occur, facilitate correction. Strategies to correct errors include repeating and rephrasing the utterances, spelling out words, contradicting a system response, correcting using a different modality (e.g., manual entry instead of speech), and restarting, among others [].

Knowing how often these strategies occur suggests what needs to be supported by the interface. The SENECA project [,] revealed that the most frequent errors for navigation tasks were spelling problems of various types, entering or


Table 6: Grice's conversational maxims (with examples added by the authors).

Maxim of Quantity: be informative.
Example: M: Please say the street name. U: . . . Baxter Road (. . . is the house number).
Guidance: (i) Make your contribution as informative as is required, that is, for the current purpose of the conversation. (ii) Do not make your contribution more informative than is required.

Maxim of Quality: make your contribution one that is true.
Example: U: Toledo Zoo, Michigan (but Toledo is in Ohio).
Guidance: (i) Do not say what you believe to be false. (ii) Do not say that for which you lack evidence.

Maxim of Relevance: be relevant.
Example: U: I want to go to Best Buy, and the system responds with all Best Buy stores, including those hundreds of miles away, not just the local ones.

Maxim of Manner: be perspicuous.
Examples: (i) M: Please say set as destination, dial, or back. U: Dial. Oh no, don't dial, back (the user wants to say back). (ii) M: Please say the POI category. U: Let's see. Recreation.
Guidance: (i) Avoid obscurity of expression. (ii) Avoid unnecessary ambiguity. (iii) Be brief (avoid unnecessary prolixity). (iv) Be orderly.

Table 7: Examples of errors in different modules of speech-controlled interfaces (adapted from Skantze []).

Speech detection: truncated utterances; artifacts such as noise and side talk; barge-in problems.
Speech recognition: insertions, deletions, substitutions.
Language processing/parsing: concept failure, speech act tagging.
Dialogue manager: errors in reference resolution, errors in plan recognition.
Response generation: ambiguous references, too much information presented at once, TTS quality, audio quality.

Table 8: Enhanced version of the Veronis [] error-classification scheme.

Lexical level (word):
System, performance: letter substitution, letter insertion, letter deletion.
System, competence: word missing in dictionary; missing inflection rule.
User, performance: letter substitution, letter insertion, letter deletion, letter transposition, syllabic error, slips of the tongue.
User, competence: nonword or completely garbled word.

Syntactic level (sentence structure):
System, competence: missing rule.
User, performance: word substitution, word insertion, word deletion, word transposition.
User, competence: construction error.

Semantic level (meaning):
System, competence: incomplete or contradictory knowledge representation; unexpected situation.
User, competence: conceptual error, including incomplete or contradictory knowledge representation; pragmatic error, including dialogue law violation.

choosing the wrong street, and using wrong commands. For phone-dialing tasks, the most frequent errors were stops within digit sequences. In general, most of the user errors were vocabulary errors (partly spelling errors), dialogue flow errors, and PTA (push-to-activate) errors, that is, missing or inappropriate PTA activation.

Lo et al. [] reported that construction and relationship errors were % and %, respectively. Construction errors occur when subjects repeat words, forget to say command words (a violation of grounding), or forget to say any other words that were given. Relationship errors occur when subjects make incorrect matches between the given words


Table 9: Variables used for evaluating entire systems or system modules.

Whole system: task completion time, task completion rate, transaction success, number of interaction problems, query density, concept efficiency.
Speech recognition: word and sentence error rate, vocabulary coverage, perplexity.
Speech synthesizer: user perception, speech intelligibility, pleasantness, naturalness.
Language understanding: lexical coverage, grammar coverage, real-time performance, concept accuracy, concept error rate.

and song title, album name, and/or artist name. Relationship errors were common because subjects were not familiar with the given songs/albums/artists.

6. How Should Speech Interfaces Be Assessed and What Should Be Measured?

6.1. What Methods Should Be Used? Given the lack of models to predict user performance with speech interfaces, the evaluation of the safety and usability (usability testing) of those interfaces has become even more important. Evaluations may either be performed only with the system itself (on a bench top) or with the system integrated into a motor vehicle (or a simulator cab) while driving.

The most commonly used method to evaluate in-vehicle speech interfaces is the Wizard of Oz method [,,,,], sometimes implemented using Suede []. In a Wizard of Oz experiment, subjects believe that they are interacting with a computer system, not a person simulating one. The wizard (experimenter), who is remote from the subject, observes the subject's actions and simulates the system's responses in real time. To simulate a speech-recognition application, the wizard types what users say; for a text-to-speech system, the wizard reads the text output, often in a machine-like voice. Usually, it is much easier to tell a person how to emulate a machine than to write the software to tell a computer to do it. The Wizard of Oz method allows for the rapid simulation of speech interfaces and the collection of data from users interacting with a speech interface, allowing for multiple iterations of the interface to be tested and redesigned.
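As an illustration of how lightweight such a simulation can be, the sketch below (hypothetical; not the authors' tool and not Suede) lets the wizard transcribe each participant utterance and choose a canned system reply, logging every turn with a timestamp for later scoring:

```python
# Wizard of Oz session logger sketch (hypothetical). The wizard types what
# the participant said and picks a canned reply; both sides of every turn
# are written to a CSV with elapsed time so turns and task time can be scored.

import csv
import time

PROMPTS = {  # invented prompt set for a destination-entry task
    "1": "Command please.",
    "2": "The city please.",
    "3": "What is the street?",
    "4": "Show map or start guidance.",
}

with open("woz_log.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["time_s", "speaker", "utterance"])
    start = time.time()
    while True:
        heard = input("wizard> participant said (blank to quit): ")
        if not heard:
            break
        log.writerow([round(time.time() - start, 2), "user", heard])
        choice = input("wizard> reply number 1-4: ")
        reply = PROMPTS.get(choice, "Please repeat that.")
        print("SYSTEM:", reply)  # a real rig would route this to TTS
        log.writerow([round(time.time() - start, 2), "machine", reply])
```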

6.2. What Should Be Measured? Dybkjær has written several papers on speech interface evaluation, the most thorough of which is Dybkjær et al. []. That paper identified a number of variables that could be measured (Table 9), in part because there are many attributes to consider.

Walker et al. [] proposed a framework for usability evaluation of spoken dialogue systems, known as PARADISE (PARAdigm for DIalogue System Evaluation). (See [] for criticisms.) Equations were developed to predict dialog efficiency (which depends on mean elapsed time and the mean number of user moves), dialog quality costs (which depend on the number of missing responses, the number of errors, and many other factors), and task success, measured by the Kappa coefficient defined below:

κ = (P(A) - P(E)) / (1 - P(E)),

where P(A) = the proportion of times that the actual set of dialogues agrees with the scenario keys, and P(E) = the proportion of times that the dialogues and the keys are expected to agree by chance.
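A minimal sketch of that computation (the example data are invented; P(E) is estimated from the marginal frequencies of the attribute values):

```python
# Kappa for task success as in PARADISE: observed agreement between logged
# dialogue values and scenario keys, corrected for chance agreement.

from collections import Counter

def kappa(dialogue_values, key_values):
    n = len(dialogue_values)
    p_a = sum(d == k for d, k in zip(dialogue_values, key_values)) / n
    d_counts, k_counts = Counter(dialogue_values), Counter(key_values)
    values = set(dialogue_values) | set(key_values)
    p_e = sum((d_counts[v] / n) * (k_counts[v] / n) for v in values)
    return (p_a - p_e) / (1 - p_e)

# e.g., the destination city recorded in six dialogues versus the keys
dialogues = ["Ann Arbor", "Detroit", "Ann Arbor", "Flint", "Detroit", "Ann Arbor"]
keys      = ["Ann Arbor", "Detroit", "Ann Arbor", "Detroit", "Detroit", "Ann Arbor"]
print(round(kappa(dialogues, keys), 2))   # -> 0.71
```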

In terms of performance while driving, there is no standard or common method for evaluating speech interfaces, with evidence from bench-top, simulator, and on-road experiments being used. There are two important points to keep in mind when conducting such evaluations. First, in simulator and on-road experiments, performance on the secondary speech-interface task depends on the demand of the primary driving task. However, the demand or workload of that task is rarely quantified [,]. Second, there is great inconsistency in how secondary-task performance measures are defined, if they are defined at all, making the comparison of evaluations quite difficult []. (See [] for more information.) Using the definitions in SAE Recommended Practice J2944 [] is recommended.
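As an example of why shared definitions matter, even a measure as simple as standard deviation of lane position depends on choices (sample rate, lane reference point, segment of driving scored) that such definitions pin down. A minimal sketch (assuming fixed-rate lane-position samples in meters):

```python
# Standard deviation of lane position (SDLP) over a task, assuming lane
# position is sampled at a fixed rate and expressed in meters from lane center.

import statistics

def sdlp(lane_position_m):
    return statistics.pstdev(lane_position_m)  # population SD of the samples

print(round(sdlp([0.05, 0.12, -0.04, 0.20, 0.08, -0.10]), 3))  # -> 0.099
```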

7. Summary

The issues discussed in this paper are probably just a few of those which should be considered in a systematic approach to the design and development of speech interfaces.

7.1. What Are Some Examples of Automotive Speech Interfaces? Common automotive examples include CHAT, CU Move, Ford Model U, Linguatronic, SENECA, SYNC, VICO, and Volkswagen. Many of these examples began as collaborative projects that eventually became products. SYNC is the best known.

Also important are nonautomotive-specific interfaces that will see in-vehicle use, in particular, Apple Siri for the iPhone and Google Voice Actions for Android phones.

7.2. Who Uses Speech Interfaces, for What, and How Often? Unfortunately, published data on who uses speech interfaces and how real drivers in real vehicles use them is almost nonexistent. There are several studies that examine how these systems are used in driving simulators, but those data do not address this question.

7.3. What Are the Key Research Results of the User Performance Using Speech Interfaces Compared with the User Performance Using Visual-Manual Interfaces? To understand the underlying research, Baron and Green's review [] is a recommended summary. Because task complexity differed between tests, comparing alternative speech systems is not easy.


However, when compared with visual-manual interfaces, speech interfaces led to consistently better lane keeping, shorter peripheral detection times, lower workload ratings, and shorter glance durations away from the road. Task completion time was sometimes greater and sometimes less, depending upon the study.

7.4. How Should Speech Interfaces Be Designed? What Are the Key Design Standards and References, Design Principles, and Results from Research? There are a large number of relevant technical standards to help guide speech interfaces. In terms of standards, various ISO standards focus on the assessment of the speech interaction, not on design. Speech-quality assessment is considered by ITU-T P.800. For design, key guidelines include []. A number of books also provide useful design guidance, including [].

Finally, the authors recommend that any individual seriously engaged in speech-interface design should understand the linguistic terms and principles (turns, speech acts, grounding, etc.), as the literature provides several useful frameworks for classifying errors and information that provides clues as to how to reduce errors associated with using a speech interface.

7.5. How Should Speech Interfaces Be Assessed and What Should Be Measured? The Wizard of Oz method is commonly used in the early stages of interface development. In that method, an unseen experimenter behind the scenes simulates the behavior of a speech interface by recognizing what the user says, or speaking in response to what the user says, or both. Wizard of Oz simulations take much less time to implement than other methods.

As automotive speech interfaces move closer to production, the safety and usability of those interfaces are usually assessed in a driving simulator, and sometimes on the road. The linguistics literature provides a long list of potential measures of the speech interface that could be used, with task time being the most important. Driving-performance measures, such as standard deviation of lane position and gap variability, are measured, as is eyes-off-the-road time. These studies often have two key weaknesses: (1) the demand/workload of the primary task is not quantified, yet performance on the secondary speech task can depend on that demand, and (2) measures and statistics describing primary-task performance are not defined. A solution to the first problem is to use equations being developed by the second author to quantify primary-task workload. The solution to the second problem is to use the measures and statistics in SAE Recommended Practice J2944 [] and refer to it.

Driver distraction is and will continue to be a major concern. Some view speech interfaces as a distraction-reducing alternative to visual-manual interfaces. Unfortunately, at this point, data on actual use by drivers is almost nonexistent. There is some information on how to test speech interfaces, but technical standards cover only a limited number of aspects.

There is very little to support design other than guidelines. For most engineered systems, developers use equations and models to predict system and user performance, with testing serving as verification of the design. For speech interfaces, those models do not exist. This paper provides some of the background information needed to create those models.

References

[] J. C. Stutts, D. W. Reinfurt, and L. Staplin, "The role of driver distraction in traffic crashes," AAA Foundation for Traffic Safety, Washington, DC, USA, https://www.aaafoundation.org/sites/default/files/distraction.pdf.

[] O. Tsimhoni, D. Smith, and P. Green, "Address entry while driving: speech recognition versus a touch-screen keyboard," Human Factors.

[] J. D. Lee, B. Caven, S. Haake, and T. L. Brown, "Speech-based interaction with in-vehicle computers: the effect of speech-based e-mail on drivers' attention to the roadway," Human Factors.

[] F. Weng, B. Yan, Z. Feng et al., "CHAT to your destination," in Proceedings of the SIGdial Workshop on Discourse and Dialogue, Antwerp, Belgium.

[] F. Weng, S. Varges, B. Raghunathan et al., "CHAT: a conversational helper for automotive tasks," in Proceedings of the International Conference on Spoken Language Processing (InterSpeech/ICSLP), Pittsburgh, Pa, USA.

[] J. H. L. Hansen, J. Plucienkowski, S. Gallant, B. Pellom, and W. Ward, "CU-Move: robust speech processing for in-vehicle speech systems," in Proceedings of the International Conference on Spoken Language Processing (ICSLP), Beijing, China.

[] R. Pieraccini, K. Dayanidhi, J. Bloom et al., "Multimodal conversational systems for automobiles," Communications of the ACM.

[] P. Heisterkamp, "Linguatronic: product-level speech system for Mercedes-Benz cars," in Proceedings of the 1st International Conference on Human Language Technology Research, Association for Computational Linguistics, San Diego, Calif, USA.

[] W. Minker, U. Haiber, P. Heisterkamp, and S. Scheible, "The SENECA spoken language dialogue system," Speech Communication.

[] Sync, http://www.ford.com/syncmyride/.

[] P. Geutner, F. Steffens, and D. Manstetten, "Design of the VICO spoken dialogue system: evaluation of user expectations by Wizard-of-Oz experiments," in Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC), Las Palmas, Spain.

[] J. C. Chang, A. Lien, B. Lathrop, and H. Hees, "Usability evaluation of a Volkswagen Group in-vehicle speech system," in Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Essen, Germany.

[] T. Dorchies, "Come again? Vehicle voice recognition biggest problem in J.D. Power and Associates study," http://news.consumerreports.org/cars///the-ford-sync-system-reads-text-messages-in-theory-mustang.html.

[] Consumer Reports, "The Ford SYNC system reads text messages . . . in theory," http://news.consumerreports.org/cars///the-ford-sync-system-reads-text-messages-in-theory-mustang.html.


[] JustAnswer, "Chrysler C: I have a problem with Uconnect telephone," http://www.justanswer.com/chrysler/ssy-chrysler-c-problem-uconnect-telephone.html.

[] B. Pellom, W. Ward, J. Hansen et al., "University of Colorado dialog systems for travel and navigation," in Proceedings of the 1st International Conference on Human Language Technology Research, Association for Computational Linguistics, San Diego, Calif, USA.

[] C. Carter and R. Graham, "Experimental comparison of manual and voice controls for the operation of in-vehicle systems," in Proceedings of the Triennial Congress of the International Ergonomics Association and Annual Meeting of the Human Factors and Ergonomics Society (IEA/HFES), Human Factors and Ergonomics Society, Santa Monica, Calif, USA.

[] C. Forlines, B. Schmidt-Nielsen, B. Raj, K. Wittenburg, and P. Wolf, "A comparison between spoken queries and menu-based interfaces for in-car digital music selection," in Proceedings of the International Conference on Human-Computer Interaction (INTERACT), Rome, Italy.

[] L. Garay-Vega, A. K. Pradhan, G. Weinberg et al., "Evaluation of different speech and touch interfaces to in-vehicle music retrieval systems," Accident Analysis and Prevention.

[] U. Gartner, W. Konig, and T. Wittig, "Evaluation of manual vs. speech input when using a driver information system in real traffic," in Proceedings of Driving Assessment: The First International Driving Symposium on Human Factors in Driving Assessment, Training and Vehicle Design, Aspen, Colo, USA.

[] K. Itoh, Y. Miki, N. Yoshitsugu, N. Kubo, and S. Mashimo, "Evaluation of a voice-activated system using a driving simulator," SAE Technical Paper, SAE World Congress & Exhibition, Society of Automotive Engineers, Warrendale, Pa, USA.

[] J. Maciej and M. Vollrath, "Comparison of manual vs. speech-based interaction with in-vehicle information systems," Accident Analysis and Prevention.

[] M. C. McCallum, J. L. Campbell, J. B. Richman, J. L. Brown, and E. Wiese, "Speech recognition and in-vehicle telematics devices: potential reductions in driver distraction," International Journal of Speech Technology.

[] T. A. Ranney, J. L. Harbluk, and Y. I. Noy, "Effects of voice technology on test track driving performance: implications for driver distraction," Human Factors.

[] J. Shutko, K. Mayer, E. Laansoo, and L. Tijerina, "Driver workload effects of cell phone, music player, and text messaging tasks with the Ford SYNC voice interface versus handheld visual-manual interfaces," SAE Technical Paper, SAE World Congress & Exhibition, Society of Automotive Engineers, Warrendale, Pa, USA.

[] O. Tsimhoni, D. Smith, and P. Green, "Destination entry while driving: speech recognition versus a touch-screen keyboard," Tech. Rep., University of Michigan Transportation Research Institute, Ann Arbor, Mich, USA.

[] J. Villing, C. Holtelius, S. Larsson, A. Lindstrom, A. Seward, and N. Aberg, "Interruption, resumption and domain switching in in-vehicle dialogue," in Proceedings of the International Conference on Natural Language Processing, Gothenburg, Sweden.

[] V. E.-W. Lo, P. A. Green, and A. Franzblau, "Where do people drive? Navigation system use by typical drivers and auto experts," Journal of Navigation.

[] U. Winter, T. J. Grost, and O. Tsimhoni, "Language pattern analysis for automotive natural language speech applications," in Proceedings of the 2nd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Pittsburgh, Pa, USA.

[] K. Takeda, J. H. L. Hansen, P. Boyraz, L. Malta, C. Miyajima, and H. Abut, "International large-scale vehicle corpora for research on driver behavior on the road," IEEE Transactions on Intelligent Transportation Systems.

[] A. Baron and P. A. Green, "Safety and usability of speech interfaces for in-vehicle tasks while driving: a brief literature review," Tech. Rep., University of Michigan Transportation Research Institute, Ann Arbor, Mich, USA.

[] B. Faerber and G. Meier-Arendt, "Speech control systems for handling of route guidance, radio and telephone in cars: results of a field experiment," in Vision in Vehicles VII, A. G. Gale, Ed., Elsevier, Amsterdam, The Netherlands.

[] A. Kun, T. Paek, and Z. Medenica, "The effect of speech interface accuracy on driving performance," in Proceedings of the Annual Conference of the International Speech Communication Association (Interspeech), Antwerp, Belgium.

[] A. W. Gellatly and T. A. Dingus, "Speech recognition and automotive applications: using speech to perform in-vehicle tasks," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Santa Monica, Calif, USA.

[] R. M. Schumacher, M. L. Hardzinski, and A. L. Schwartz, "Increasing the usability of interactive voice response systems: research and guidelines for phone-based interfaces," Human Factors.

[] Intuity Conversant Voice Information System Application Design Handbook, AT&T Product Documentation Development, Denver, Colo, USA.

[] L. J. Najjar, J. J. Ockeman, and J. C. Thompson, "User interface design guidelines for speech recognition applications," presented at the IEEE VARIS Workshop, Atlanta, Ga, USA, http://www.lawrence-najjar.com/papers/User_interface_design_guidelines_for_speech.html.

[] Z. Hua and W. L. Ng, "Speech recognition interface design for in-vehicle system," in Proceedings of the 2nd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, ACM, Pittsburgh, Pa, USA.

[] Ergonomics - Assessment of Speech Communication, ISO Standard.

[] Ergonomics - Construction and Application of Tests for Speech Technology, ISO Technical Report.

[] Information Technology - Vocabulary - Artificial Intelligence - Speech Recognition and Synthesis, ISO/IEC Standard.

[] Acoustics - Audiometric Test Methods - Speech Audiometry, ISO Standard.

[] Ergonomics of Human-System Interaction - Usability Methods Supporting Human-Centered Design, ISO Standard.

[] Road Vehicles - Ergonomic Aspects of Transport Information and Control Systems - Specifications for In-Vehicle Auditory Presentation, ISO Standard.


[] Voice User Interface Principles and Guidelines (Draft), SAE Recommended Practice.

[] R. Hopper, Telephone Conversation, Indiana University Press, Bloomington, IN, USA.

[] B. Balentine and D. P. Morgan, How to Build a Speech Recognition Application, Enterprise Integration Group, San Ramon, Calif, USA.

[] M. H. Cohen, J. P. Giangola, and J. Balogh, Voice User Interface Design, Pearson, Boston, Mass, USA.

[] R. A. Harris, Voice Interaction Design, Morgan Kaufmann, San Francisco, Calif, USA.

[] J. R. Lewis, Practical Speech User Interface Design, CRC Press, Boca Raton, Fla, USA.

[] S. C. Levinson, Pragmatics, Cambridge University Press, New York, NY, USA.

[] G. Skantze, "Error detection in spoken dialogue systems," http://citeseer.ist.psu.edu/cache/papers/cs//http:zSzzSzwww.ida.liu.sezSznlplabzSzgsltzSzpaperszSzGSkantze.pdf/error-detection-in-spoken.pdf.

[] J. L. Austin, How to Do Things with Words, Harvard University Press, Cambridge, Mass, USA.

[] A. Akmajian, R. A. Demers, A. K. Farmer, and R. M. Harnish, Linguistics: An Introduction to Language and Communication, MIT Press, Cambridge, Mass, USA.

[] J. R. Searle, "A taxonomy of illocutionary acts," in Language, Mind and Knowledge, Minnesota Studies in the Philosophy of Science, K. Gunderson, Ed.

[] H. P. Grice, "Logic and conversation," in Syntax and Semantics 3: Speech Acts, P. Cole and J. L. Morgan, Eds., Academic Press, New York, NY, USA.

[] J. Veronis, "Error in natural language dialogue between man and machine," International Journal of Man-Machine Studies.

[] N. Chomsky, Aspects of the Theory of Syntax, The MIT Press, Cambridge, Mass, USA.

[] M. L. Bourguet, "Towards a taxonomy of error-handling strategies in recognition-based multi-modal human-computer interfaces," Signal Processing.

[] C.-M. Karat, C. Halverson, D. Horn, and J. Karat, "Patterns of entry and correction in large vocabulary continuous speech recognition systems," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, Pittsburgh, Pa, USA.

[] K. Larson and D. Mowatt, "Speech error correction: the story of the alternates list," International Journal of Speech Technology.

[] D. Litman, M. Swerts, and J. Hirschberg, "Characterizing and predicting corrections in spoken dialogue systems," Computational Linguistics.

[] E.-W. Lo, S. M. Walls, and P. A. Green, "Simulation of iPod music selection by drivers: typical user task time and patterns for manual and speech interfaces," Tech. Rep., University of Michigan Transportation Research Institute, Ann Arbor, Mich, USA.

[] J. F. Kelley, "An empirical methodology for writing user-friendly natural language computer applications," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, Boston, Mass, USA.

[] J. D. Gould, J. Conti, and T. Hovanyecz, "Composing letters with a simulated listening typewriter," Communications of the ACM.

[] P. Green and L. Wei-Haas, "The Wizard of Oz: a tool for rapid development of user interfaces," Tech. Rep., University of Michigan Transportation Research Institute, Ann Arbor, Mich, USA.

[] A. K. Sinha, S. R. Klemmer, J. Chen, J. A. Landay, and C. Chen, "Suede: iterative, informal prototyping for speech interfaces," in Proceedings of CHI, Association for Computing Machinery, New York, NY, USA.

[] L. Dybkjær, N. O. Bernsen, and W. Minker, "Evaluation and usability of multimodal spoken language dialogue systems," Speech Communication.

[] M. Walker, C. Kamm, and D. Litman, "Towards developing general models of usability with PARADISE," Natural Language Engineering.

[] M. Hajdinjak and F. Mihelic, "The PARADISE evaluation framework: issues and findings," Computational Linguistics.

[] J. Schweitzer and P. A. Green, "Task acceptability and workload of driving urban roads, highways, and expressways: ratings from video clips," Tech. Rep., University of Michigan Transportation Research Institute, Ann Arbor, Mich, USA.

[] P. Green, B. T.-W. Lin, J. Schweitzer, H. Ho, and K. Stone, "Evaluation of a method to estimate driving workload in real time: watching clips versus simulated driving," Tech. Rep., University of Michigan Transportation Research Institute, Ann Arbor, Mich, USA.

[] P. Green, "Using standards to improve the replicability and applicability of driver interface research," in Proceedings of the International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI), Portsmouth, UK.

[] M. R. Savino, "Standardized names and definitions for driving performance measures" [Ph.D. thesis], Department of Mechanical Engineering, Tufts University, Medford, Mass, USA.

[] Operational Definitions of Driving Performance Measures and Statistics (Draft), SAE Recommended Practice J2944.
