Running head: SUMMATIVE EVALUATION OF EDS
Summative IR Evaluation of EBSCO Discovery Service (EDS)
Ken Norquist
Rutgers University
Table of Contents
INTRODUCTION
ISSUES IN FOCUS
EVALUATION METHODS
RESULTS
CONCLUSIONS & IMPLICATIONS
REFERENCES
GRAPHICAL APPENDIX
Introduction
For this project I’ve focused on a summative evaluation of EBSCO’s Discovery Service
(EDS) from the perspective and benefit of non-traditional students/adult learners in an
academic/higher educational setting, focusing on the system’s usability, effectiveness, efficiency,
and affordances (features). Selection of this IR system and user population stems from real-world
problems and successes experienced while implementing this new federated search agent in the
academic environment in which I work. Propelled by increasingly agile search algorithms and a user-driven need for simplicity in searching, federated agents have established a foothold in IR environments: "Federated search offers a bridge between the reluctant knowledge searcher and the wealth of information in library databases, a best fit between the ideal and reality" (Curtis & Dorner, 2005, p. 36). On the heels of other successful federated applications such as Thomson Gale's PowerSearch, similar efforts by Alexander Street Press, and even EBSCO's own Kids Search (Young, 2005, p. 95), EDS was launched by the industry giant EBSCO in early 2009
as “a perfect combination of content and technology, taking into account all of the critical
elements in the research process, and changing the expectations of how a discovery solution can
and should address the needs of its users”. The list of IR issues that EDS suggests it solves is
beyond laudable, and includes “highly refined relevancy ranking, elaborate indexing, instant
access to full text" and more (EDS About Page). EDS relies on "pre-indexed content and metadata" rather than on the remote resource connectors used by many other federated search components, and it is able to search "all of the pre-indexed materials—potentially including catalog records, subscription databases, and web content—more quickly than with existing metasearch tools" (Hadro, 2009, p. 17). Sounds great, but there's many a slip twixt the cup and the lip.
The system was of particular interest to me because, for reasons dealing with functionality,
usability, and scope, it is both a savior for some users and an enormous obstacle for others. Even
with a laudable list of affordances, there is still a significant chasm appearing between those who
can leverage the familiar EBSCO interface and those who are lost in a sea of too many options
and too little direct linking in the discovery process. The purpose of this evaluation is to map out
both the helpful and hindering elements of this IR system to inform how I and my library service
peers teach the IR system as a part of information literacy efforts, how users can leverage the
‘good’ and avoid the ‘bad’, and ultimately, whether or not this service is worth keeping as a part
of the institution’s wider digital resource base.
Issues in Focus
What drew me to this particular user base and IR system was, as mentioned above, the perception that this new federated discovery service seems to be having a somewhat negative effect on the adult learners who comprise my local student population. Before focusing on
the new discovery platform itself, it seemed pertinent to survey the landscape of thought
surrounding both how these adult learners engage search environments, as well as how federated
searching, a powerful but still developing facet of IR environments, is affecting student learning
behavior.
Research into the information seeking and learning behaviors of adult students (especially
those from such a diverse international cultural landscape as those of my user population)
demonstrates that adult learners survive and thrive best in “learning to learn” environments,
where “learners need to think about or monitor what they do while they are doing it” (ARIS
Information Sheet, 2000, p. 4), that is, to engage in a cycle of integrating new information, acting on that new information, and then reflecting on that new information and how it fits into the
perspective which began that cycle. In Boud’s 1987 work Appreciating Adults Learning from the
Learner's Perspective, as cited in the ARIS sheet, these steps are discussed as “Association,
Integration, Validation, and Appropriation”, with the argument that “For this to be done
efficiently and realistically learners need to be selective—to focus one or two things at a time”
(ARIS Information Sheet, 2000, p. 3). Here is where EDS breaks down for my user base. As a
powerful federated searching tool, EDS does indeed hold up to its promises of climbing atop
each of your subscription databases with relative ease (from a system perspective) to enable deep
crawls from a single search. Even those who raise an eyebrow at the lofty pedestal on which information professionals have placed federated search agents admit their benefits: "Patrons will no
longer have to repeat a search in each database. In fact, this time-saving feature might improve
searching because students will find articles from databases they otherwise may not have
searched. Having a more complete search leads to more complete research and, hopefully, better
results” (Baer, 2004, p. 518). Indeed, federated searching is here to stay, as “When it works well
it reduces the time and effort spent in both searching and learning to use the interfaces of various
databases” (Curtis & Dorner, 2005, p. 35), and as Young coined (of federated searching in
libraries) “Welcome to the era of the super search” (Young, 2005, p. 95). But as others have
noted and as this study illuminates, not all federated search environments are helpful for all
users.
The problems encountered in federated search environments are not insignificant. It is without
question that these power searches are able to reach farther for more information in a given time
frame than the previous individual-portal approach, but with that extended reach comes a lessening of the power of the user's query: "the quantity of articles searched increases, but the quality of
the search is jeopardized. The federated search can't use special features of any individual
database that are not available on all of the databases. Searches are essentially reduced to the
lowest common denominator” (Baer, 2004, p. 519). This lack of specificity and inadvertently
forced ignorance of the unique affordances offered by individual databases is at best a hindrance
and at worst a workflow stoppage factor. Further, without knowing which elements of the
individual database records would be searched by EDS (full text, titles, abstracts, etc.) or whether
the Boolean operators they've been taught to use will work in all of the databases
being crawled through the federated query, even experienced searchers are left shooting at a
somewhat hidden target (Baer, 2004, p. 519). Finally, and based on the findings of this study in
particular, given the adult learner’s observed need for limited scope or focus in searching (see
note from ARIS info sheet on previous page), the sheer volume of options for object discovery
presented to users in an EDS search is overwhelming to some. While it is true that “federated
searching removes, or at least reduces, the number of decisions patrons need to make at the
beginning of a search” (Baer, 2004, p. 518), the polar opposite is true once the search button is
clicked. Where users once found simplicity with a single-search approach, they now see a myriad
of links, filters, and thousands of results to parse. It is clear that this phenomenon is a significant
factor in the reality of federated search agents in library spheres: "Librarians differ in their
opinions about how federated search results should be presented. Some believe that merged
results are more helpful to student users. Others argue that students and staff might benefit from
being able to identify the database from which the results have originally come” (Curtis &
Dorner, 2005, p. 36). EDS does not offer this sourcing indication at present, and worse, the right-
navigation links offered in EDS to navigate to the individual databases in the federated scope do
not carry the query or filters over into these individual portals—meaning that searchers can
spend precious time refining an EDS-based query and lose it all when trying to focus in on any
given database whose results were included in the larger federated search.
Evaluation Methods
As shared in my introduction, my intent for this project was to focus on evaluating EDS’s
usability, effectiveness, efficiency, and affordances (features) from an end-user perspective
toward the ultimate goal of determining whether this federated agent is having a positive or
negative effect on the user population and, by extension, whether subscription to the service
should be renewed. The approach I selected was to identify four research queries of varied complexity, common in the day-to-day research practices of my user community: two individual queries for business-research topics, where EDS
searches our subscription databases and retrieves objects from Business Source Complete, Lexis-
Nexis Academic, & Mergent Online, and two individual queries for social science research
topics, where EDS searches our subscriptions and retrieves objects from Academic Search
Complete, ERIC, eLibrary, & Credo Reference. After some review of historical reference
interview notes, syllabi for key (core) courses, and usage stats supplied by database vendors, I
asked users to locate and superficially validate four sources which they felt satisfied the specific
query to the level where they felt relatively comfortable using it in an assignment submission
(syllabi showed an average number of required resources as "3-5"). I homed in on the following
as the queries to be utilized in the EDS evaluation:
Business:
BQ1. How are businesses addressing the changing role of social responsibility in the online marketplace?
BQ2. What are the basic elements of a sound business review?

Social Science:
SQ1. Discuss the strengths and weaknesses of Human Sensory Perception in legal applications.
SQ2. What is an amphiboly? Give 3 examples.
For this study I was able to recruit six volunteers from the affected user population and divide them into two groups: Group 1, hereafter referred to as the EDS Group, and Group 2, hereafter referred to as the Individual Database (IDB) Group. I was very fortunate to be able to recruit volunteers for each group at three levels of experience with common academic electronic resource use. Each group comprised one novice user (in both groups, a 1st-year student enrolled in an Associate in Arts in Criminal Justice program), one average user (in both groups, a 3rd-year student enrolled in a Bachelor of Business Administration program), and one expert user (in both groups, a graduate student enrolled in a Master of Business Administration program). One group engaged the EDS platform for object discovery while the other selected and engaged the individual research portals covered by EDS's wider federated search and made available from the University's Learning Resources page.
Each user engagement was timed and was followed by a simple paper questionnaire to gauge opinions and share feedback. In addition to the chronological evidence (how long each engagement took to reach satisfactory information objects), users were asked to apply a numerical rating to each system engagement, scoring each on a scale of "1 (so bad it's ugly)" to "5 (I'd even recommend it to my enemies)", and were asked for basic empirical evidence and short narratives assessing the IR environments along the following lines:
Usability
  Learning-oriented measures:
  - Number of obstacles/workflow stoppage points encountered
  User-oriented measures:
  - Number of "errors" encountered
  - Number of backtracking clicks necessary during search
  - Number of navigational moves required

Effectiveness
  Satisfaction-based measures:
  - User notes on how well the system supported them in satisfying the research goal

Efficiency
  - How long did the retrieval engagement take?

Affordances
  Subjective measures:
  - User notes on the usefulness of any notable feature, or on features which could have been useful if they existed
*note: As EDS leverages the same EBSCO platform as all other EBSCO services,
evaluating the usefulness of the features and affordances was not of significant
importance—as both groups encountered the same set. This element would be better
suited for an evaluation of the overall EBSCO IR System, but has little bearing on the
goal of this particular evaluation.
Once these surveys were completed and collected, detailed analysis of the individual empirical
findings and narrative feedback took place—with special focus on data which informed
efficiency in IR engagement and overall usability measures, as these facets speak most directly to
the ultimate evaluative goals.
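To make the record-keeping concrete, the per-engagement measures above could be captured in a simple structure like the following. This is a Python sketch purely for illustration: the class and field names are my own, not part of any EBSCO product, and the two sample records are transcribed from the novice users' BQ1 rows of Table 1 in the appendix.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    """One participant's timed run at one query; fields mirror the measures above."""
    group: str         # "EDS" or "IDB"
    user_level: str    # "novice", "intermediate", or "expert"
    query: str         # "BQ1", "BQ2", "SQ1", or "SQ2"
    minutes: int       # efficiency: engagement time
    stoppages: int     # usability: workflow stoppage points
    errors: int        # usability: "errors" encountered
    backtracks: int    # usability: backtracking clicks
    nav_moves: int     # usability: navigational moves

# Two sample records, transcribed from the novice users' BQ1 results in Table 1:
records = [
    Engagement("EDS", "novice", "BQ1", 57, 9, 12, 29, 49),
    Engagement("IDB", "novice", "BQ1", 41, 4, 5, 21, 23),
]

def group_total(recs, group, field):
    """Sum one measure across all of a group's engagements."""
    return sum(getattr(r, field) for r in recs if r.group == group)

print(group_total(records, "EDS", "minutes"))   # 57
print(group_total(records, "IDB", "errors"))    # 5
```

Rolling every participant's records into per-group, per-measure sums in this way is exactly the aggregation reported in the Graphical Appendix.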
Results
Note on limitations of results: I took careful precautions to present all participants with balanced instructions and a similarly appointed workspace with comparable internet connection speeds, and I built each evaluation team with novice, intermediate, and expert users whose skill in this form of research fairly approximates that of their counterpart in the other group, as evidenced by their area of study and current GPA. However, as this study was performed alone and with a sample user base that was terribly limited in size, it should be noted that the empirical results shared below will have been affected by each user's existing knowledge of the subject matter and of the IR interfaces engaged, as well as by their own internal processing and parsing time in determining which objects to select as relevant in the spirit of this study.
After a thorough review of the surveys completed by each of the six participants I began a
point-by-point evaluation of the data presented. Starting with the key measures pertaining to
overall IR system Usability and Efficiency (see Evaluation Methods above for details) it became
immediately clear that the subjectively perceived notion which underscored my expressed need
for this IR evaluation—that EDS is actually creating more problems than it solves for our
particular user community—does indeed hold true to a point, but with some surprising results.
Beginning with the chronological measure of IR system engagement, the combined engagement time to identify four satisfactory information objects was 357 minutes for the EDS group, compared with 335 minutes for the IDB group (see figure 1 in the Graphical Appendix). These totals initially appear similar, and perhaps not terribly impactful on the subscription renewal decision. However, the individual engagement times reveal a greater disparity between the novice users of the EDS and IDB groups than between the intermediate or expert users of either group, meaning that for novice users EDS is actually adding more time to each IR engagement than the individual database approach would have. This notion is upheld in a few of the narrative notes offered by the participants in this study's EDS group:
“The Discovery Service one had too many options and too many places I had to look and
click.” – Chris M.
“Each time I had to go back to find another paper I got lost all over again. I wanted to
quit a few times” – Salieu J.
“I don’t know why they added this single search thing—It took me about 5 minutes to
figure how it work first. There is a pop up that keep getting in the way which I had to
keep refreshing but I still have to spend couple of minutes to get it to work” – Kieu T.
There are some insights to be gained from examining each user's engagement time (see figure 2 in the Graphical Appendix for detailed chronological data), but perhaps the most surprising finding is that the nature of the query greatly affects the efficiency of each engagement: the EDS federated searches did perform with a bit more efficiency on the simpler, fact-based 'ready-reference' queries in series 2 (BQ2 & SQ2), where satisfaction of the query is achieved with the identification of a single element. The implications of this finding will play heavily into how the EDS service is introduced to students during Information Literacy instruction (discussed below).
Moving on to the examination of study findings in the empirically measured areas related to
Usability we see that, while the existing knowledge base of the expert users in this study rounded
over many of the sharp edges in comparison of the two groups, the data from novice and
intermediate users shows an increased number of negative usability findings. In short, this data
shows that EDS users often encountered more errors, workflow stoppage points, and required
more navigational and backtracking clicks to locate relevant information objects than did users
of the individual databases (see figures 3 & 4 in the Graphical Appendix). I believe this can be explained partially by the lack of an inclusive breadcrumb trail in EDS that would allow our users to more easily retrace their steps (which, it should be noted, does not exist in the individual EBSCO databases either), and partially by what seems to be the overwhelming weakness of our instance of the EDS service: the failure of EDS to carry queries over into the individual databases covered by the federated search. The EDS user experience begins with a
single query entered into the federated search bar, followed by a list of results delivered in
EBSCO’s familiar results frame. Users can parse through the conglomerate list to find relevant
items and then either click the item or click links directly to the individual holding databases,
however when they navigate from EDS to the individual database the query and all filtering and
customization applied are lost and users are back to square one. As there is communication
between EDS and these databases for the purposes of object discovery I can think of no viable
reason why this connectivity disappears when following that pipeline in the opposite direction.
Once again, these observations are validated by narrative feedback from the EDS group:
“I spent 20 minutes figuring out what restrictions {filters} would help me find the papers
I wanted, and then I clicked the link for Lexis and the bar was blank. I wasted all that
time and had to start over, so I just stayed with Lexis” - Salieu J.
“Why do they even put the links to the other sites in this Discovery page if they don’t do
anything?” – Kieu T.
The final measures of this evaluation dealt with the overall user satisfaction in each group.
The overall satisfaction measure was somewhat humorously designated along the scale of "1 (so bad it's ugly)" to "5 (I'd even recommend it to my enemies)" and was designed to capture a
simple but significant data point—how did our users feel about this new service? Evaluation of
the ratings assessed showed, somewhat unsurprisingly, that the platform offered little to the
expert users—with their satisfaction scores remaining the same for both groups and their
narrative notes indicating that their existing knowledge of the individual databases rendered the
federated search irrelevant in the face of more direct database engagement. However when the
observation shifts to feedback in these areas from the intermediate and novice users, a severe
discrepancy between the two research approaches appears. From the ratings attributed and the
narrative notes shared, these users found that EDS was overly complex and cumbersome compared to the individual database portals and generally got in their way:
“The Discovery page confused me…I had no idea what was for my search and what was
something else. I don’t need that many choices before I know what I’m trying to find.” –
Salieu J.
“Even after I used the advanced search it was hard to find the articles I wanted with all
of the mess they gave me to look through” – Kieu T.
From evaluation of the data collected it is clear that the current implementation of, and assumptions about, the EDS service at my institution are flawed and require examination.
Conclusion & Implications
At the outset of this evaluation I had expectations that the results gained from this study would
be helpful in a few ways. First, those who are institutionally responsible for the selection,
implementation, and maintenance of the electronic resources should be able to call on this review
to examine whether this service seems to have positively or negatively affected our user
community. In addition, and in direct proportion, I expected the results to inform how we (as
information professionals at the university) introduce our users to best-practices with EDS as a
part of information literacy efforts. Finally, the narrative elements of this evaluation and sense-
making graphic representations of key observations should enable library administration officials
to better understand the strengths and pitfalls of the EDS system from a user-perspective to
encourage more pointed and dynamic feedback based on their use. In short, I believe I have
successfully achieved these goals, if only from the perspective of a small but nonetheless representative group of users.
First, though there are definitive grounds to argue for the retention of this service, both the
empirical data and the narrative responses gathered have indeed established that the new EDS
service is having a generally negative impact on our wider user population. Though there is some
evidence that this federated approach can help users identify ready-reference query solutions
somewhat more quickly (more focused testing on this direct application would be required for
me to champion its retention based on this aspect alone), overall the data shows that from
Effectiveness, Efficiency, and Usability measures EDS is causing more problems than it solves
by extending the system engagement time, offering too many affordances and calls to action for entry-level searches, and by not allowing users to easily see and navigate forward and backward in their search (see Graphical Appendix for details). The suggestion to combat this
phenomenon would be to adjust how EDS is introduced to students.
The first step would be to remove EDS as the primary research gateway from the BlackBoard
Online Learning Platform and relegate it to a linked service as we have for other research tools.
Students should be able to select EDS if they want, but should be presented with the option of
going directly into one of the individual databases. Adding EDS as a separately chosen service
will generate accurate usage metrics to determine whether students want EDS as an option. As EDS
leverages the EBSCO search and retrieval mechanisms and interface, special attention should be
paid to this platform during Information Literacy training so as to promote more effective and
efficient use of the common affordances found in all EBSCO products—including EDS.
Additionally, during Information Literacy sessions each term we should emphasize this tool as one to utilize once students already have a sound grasp of the affordances and operations of the databases crawled in EDS's federated search. In Curtis & Dorner's work on
federated searching, they point to William Frost’s work on metasearching suggesting “selecting a
research tool is one of the first concepts that should be learned” in information literacy. He goes
on to say that metasearching is “a step backward, a way of avoiding the learning process” (Curtis
& Dorner, 2005, p. 35). Further, as Elmborg suggests in his 2010 work on adult learning in
libraries, “(information) literacy is something mobile and flexible, not just a set of skills with
written text” (Elmborg, 2010, p. 73) and our approach to sharing the best possible and most
appropriate use of each of our electronic resources must reflect this notion. He suggests that “the
future of libraries and librarianship cannot be between learners and information…but must be
alongside learners, especially those who didn’t inherit English school literacy” (Elmborg, 2010,
p. 74), as is the case for the majority of those in my user population. Our approach must be one
which considers the unique circumstances which color our user’s information seeking patterns
and one where we take the time “to figure out where people are emotionally, intellectually, and
cognitively, and to conduct conversations within their zone of proximal development” (Elmborg,
2010, p. 75). Others agree, suggesting that adult learners, who in my user community typically fall into learning patterns which Witkin would call "field dependent", where users "use their entire surroundings—including other people—to process information" and "more often report feeling disoriented or lost" (Kerka, 1998, p. 3), prefer to have guidance and a visible list of options and choices to better frame their perspective on where they've been and where they can or should go. This more evaluative approach to understanding the needs of our particular users in information literacy engagements should give us a better grasp of the initial state of their cognitive and research perspectives, and should ensure we are focused less on single-search navigation and more on identification and operation of the wider landscape of research tools which we make available to students: "The one-stop searching mentality makes teaching good information-seeking habits harder" (Baer, 2004, p. 519).
Finally, efforts should be made to capture a broader sampling of this user community's use of EDS prior to the renewal of our EBSCO contract, and to examine the implementation of EDS in our e-resource portal to ensure that all intended functionality is in place and operating correctly. This study has highlighted a number of issues with the platform, not least of which is the lack of two-way query inclusion. Users in this study clearly expected that clicking the link for one of the crawled databases shown in EDS would carry their query and filter criteria over onto the new platform; this expectation accounted for a number of the errors, workflow stoppage points, and backtracking clicks reported by participants. The institution needs to investigate whether this lack of functionality can be addressed with differing subscription levels, implementation structures, or local university IT support, or whether it will continue to be a normal operation of EDS, and then act accordingly. In my opinion, the small gains EDS offers for a few users on a few queries are not worth the extra subscription cost, and the service should be suspended until such time as the recommendations of this study, and of others which will hopefully follow, are addressed.
References

Baer, W. (2004). Federated searching: Friend or foe? College & Research Libraries News, 65(9), 518-519.

Curtis, A., & Dorner, D. G. (2005). Why federated search? Knowledge Quest, 33(3), 35-37.

EBSCO Discovery Service homepage. http://www.ebscohost.com/discovery/about

Elmborg, J. (2010). Literacies, narratives, and adult learning in libraries. New Directions for Adult and Continuing Education, (127), 67-76.

Hadro, J. (2009). EBSCOhost unveils Discovery Service. Library Journal, 134(8), 17.

Kerka, S., & ERIC Clearinghouse on Adult, Career, and Vocational Education. (1998). Learning styles and electronic information. Trends and Issues Alert.

Language Australia (Victoria), Adult Education Resource and Information Service. (2000). Learning to learn. ARIS Information Sheet, 3-5.

Young, T. E. (2005). Federated searching. School Library Journal, 51(12), 95-97.
Graphical Appendix

[Figure 1: IR System Engagement by Question. Chart: engagement time (in minutes) in EDS vs. in IDB.]

[Figure 2: Total IR Engagement Time. Chart: total engagement time by user level (Novice, Intermediate, Expert).]

[Figure 3: # of Workflow Stoppage Points, by user type (EDS/IDB; Novice, Intermediate, Expert).]

[Figure 4: # of Errors Encountered.]

[Figure 5: Clicks Required. Chart: # of navigation clicks and # of backtracking clicks, EDS Group vs. IDB Group.]

[Figure 6: User Satisfaction Ratings, 1 (Awful) to 5 (Helpful), by group and user level.]
Table 1: Empirical Data Matrix (each cell sums the Novice + Intermediate + Expert values for the group)

Chronological measure of engagement time per query (in minutes):
  EDS Group: BQ1 (57+32+16)=105; BQ2 (29+16+10)=55; SQ1 (78+42+26)=146; SQ2 (21+18+12)=51
  IDB Group: BQ1 (41+24+18)=83; BQ2 (33+28+13)=74; SQ1 (53+37+23)=113; SQ2 (29+20+16)=65

# of workflow stoppage points:
  EDS Group: BQ1 (9+4+1)=14; BQ2 (5+3+0)=8; SQ1 (6+7+2)=15; SQ2 (2+1+1)=4
  IDB Group: BQ1 (4+1+0)=5; BQ2 (2+2+1)=5; SQ1 (10+2+2)=14; SQ2 (3+0+1)=4

# of errors encountered:
  EDS Group: BQ1 (12+4+2)=18; BQ2 (6+2+0)=8; SQ1 (8+8+2)=18; SQ2 (3+2+2)=7
  IDB Group: BQ1 (5+3+0)=8; BQ2 (2+3+2)=7; SQ1 (11+3+4)=18; SQ2 (3+0+2)=5

# of backtracking clicks necessary:
  EDS Group: BQ1 (29+11+5)=45; BQ2 (15+4+4)=23; SQ1 (38+16+9)=63; SQ2 (14+8+6)=28
  IDB Group: BQ1 (21+8+1)=30; BQ2 (8+5+4)=17; SQ1 (18+11+4)=33; SQ2 (11+4+3)=18

# of navigational moves:
  EDS Group: BQ1 (49+28+22)=99; BQ2 (28+13+9)=50; SQ1 (62+32+24)=118; SQ2 (18+7+5)=30
  IDB Group: BQ1 (23+11+9)=43; BQ2 (12+8+6)=26; SQ1 (38+26+8)=72; SQ2 (10+8+7)=25

IR engagement satisfaction ratings (1-5):
  EDS Group: Novice 1; Intermediate 2; Expert 4
  IDB Group: Novice 3; Intermediate 4; Expert 4

How well the system supported you in achieving the goal (1-5):
  EDS Group: Novice 2; Intermediate 2; Expert 1
  IDB Group: Novice 4; Intermediate 4; Expert 5
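The group totals and per-query comparisons discussed in the Results section can be reproduced from Table 1 with a short script. This is a Python sketch for illustration only; the minute values are the per-query sums transcribed from the matrix above.

```python
# Per-query engagement time (in minutes), summed across the three
# participants in each group, transcribed from Table 1.
engagement_minutes = {
    "EDS": {"BQ1": 105, "BQ2": 55, "SQ1": 146, "SQ2": 51},
    "IDB": {"BQ1": 83, "BQ2": 74, "SQ1": 113, "SQ2": 65},
}

# Combined engagement time per group (the totals cited with Figure 1).
totals = {group: sum(times.values()) for group, times in engagement_minutes.items()}
print(totals)  # {'EDS': 357, 'IDB': 335}

# Per-query difference (EDS minus IDB): negative means EDS was faster.
deltas = {q: engagement_minutes["EDS"][q] - engagement_minutes["IDB"][q]
          for q in engagement_minutes["EDS"]}
print(deltas)  # {'BQ1': 22, 'BQ2': -19, 'SQ1': 33, 'SQ2': -14}
```

Consistent with the Results discussion, EDS comes out ahead only on the series-2 ready-reference queries (BQ2 and SQ2), while losing ground on the more open-ended BQ1 and SQ1.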