Hao Wu
Nov. 18, 2014
Outline
• Introduction
• Related Work
• Experiment
  • Methods
  • Results
• Conclusions & Next Steps
Introduction
• Every search engine looks alike.
• Why eye tracking in information retrieval?
  • Understand how searchers evaluate online search results
  • Enhanced interface design
  • More accurate interpretation of implicit feedback (e.g., clickthrough data)
  • More targeted metrics for evaluating retrieval performance
Background
Outline
• Introduction
• Related Work
• Experiment
  • Methods
  • Results
• Conclusions & Next Steps
Related Work
• Methods: search-engine log files; diary studies; eye tracking with detailed activity logging
• User interfaces: “faceted browsing” interfaces; dynamically categorized search results; dynamic filtering and visualization
• Eye-tracking methodologies: developing different user models
  • Combining gaze data with clickthrough data
  • Scanning order in search results
  • Patterns of fixations (scanpaths)
  • Gender differences
Key research questions
• Do people look at the same number of search results for different task types?
• Do they attend to different components of search results for navigational and informational tasks?
• Does the inclusion of more contextual information in search results help with informational tasks?
Task Type × Snippet Length
Outline
• Introduction
• Related Work
• Experiment
  • Methods
  • Results
• Conclusions & Next Steps
Methods
• Apparatus
  • MSN Search as the search server
  • Tobii x50 eye tracker
• Participants
  • 18 participants with complete data
  • Ages 18 to 50; 11 male, 7 female
  • All searched the web at least once per week
• Experimental design and procedure
  • 12 search tasks (6 tasks for each task type)
  • 3 snippet lengths
• Data collection
  • Gaze fixations ≥ 100 ms within areas of interest (AOIs) and their sub-elements
  • Non-gaze behavioral measures
    • Total time on task
    • Click accuracy
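The fixation-filtering step above (keep only fixations of at least 100 ms that land inside an AOI) can be sketched roughly as follows. This is a hypothetical illustration, not the study's actual pipeline: the AOI layout, field names, and `filter_fixations` helper are all assumptions.

```python
# Sketch of fixation filtering: keep fixations >= 100 ms inside an AOI.
# AOI coordinates and record format are illustrative assumptions.

FIXATION_THRESHOLD_MS = 100

# Each AOI is (name, left, top, right, bottom) in screen pixels.
AOIS = [
    ("title",   100, 200, 700, 220),
    ("snippet", 100, 220, 700, 280),
    ("url",     100, 280, 700, 300),
]

def aoi_for(x, y):
    """Return the name of the AOI containing point (x, y), or None."""
    for name, left, top, right, bottom in AOIS:
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

def filter_fixations(fixations):
    """Keep fixations >= 100 ms that fall inside some AOI.

    `fixations` is a list of dicts with keys 'x', 'y', 'duration_ms'.
    Returns (aoi_name, duration_ms) pairs for the retained fixations.
    """
    kept = []
    for f in fixations:
        if f["duration_ms"] < FIXATION_THRESHOLD_MS:
            continue  # below the 100 ms fixation threshold
        name = aoi_for(f["x"], f["y"])
        if name is not None:
            kept.append((name, f["duration_ms"]))
    return kept
```

For example, a 250 ms fixation on the title AOI is retained, while an 80 ms fixation or one outside every AOI is dropped.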
Examples
Outline
• Introduction
• Related Work
• Experiment
  • Methods
  • Results
• Conclusions & Next Steps
Overall searching behavior 1
• Linear order: attention vs. ranking?
Overall searching behavior 2
• How many other items above and below the selected document did users look at?
Overall searching behavior 3
• Hub-and-spoke pattern
• Does fixation time on each document change with subsequent visits to the first page?
Task Type & Snippet Length
• Analysis: repeated-measures multivariate analysis of variance (MANOVA)
  • 2 (Task Type) × 3 (Snippet Length) × 2 (Repetition)
• Main effects
  • Task type: significant
  • Repetition: not significant
  • Snippet length: significant
• Interaction of Task Type × Snippet Length
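A within-subjects design like the one above can be analyzed in Python with statsmodels' `AnovaRM`. This is a minimal sketch, not the study's analysis: it uses a univariate repeated-measures ANOVA over the 2 × 3 task-type × snippet-length factors rather than the full three-factor MANOVA, and the data, column names, and dependent variable are synthetic assumptions.

```python
# Sketch of a repeated-measures analysis resembling the
# 2 (task type) x 3 (snippet length) within-subjects design.
# Synthetic data; column names are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(18):  # 18 participants, as in the study
    for task in ("navigational", "informational"):
        for length in ("short", "medium", "long"):
            rows.append({
                "subject": subject,
                "task_type": task,
                "snippet_length": length,
                # synthetic "time on task" in seconds
                "time_on_task": 30 + rng.normal(0, 5),
            })
df = pd.DataFrame(rows)

# One observation per subject per cell -> balanced repeated measures.
res = AnovaRM(df, depvar="time_on_task", subject="subject",
              within=["task_type", "snippet_length"]).fit()
print(res)  # F and p values for each main effect and the interaction
```

The fitted table reports one row per main effect plus one for the task-type × snippet-length interaction, which is the comparison the slides summarize as significant or not.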
Mean time on task
• How much time did users spend on each task as snippet length varied?
Click accuracy
• How accurate were users when selecting the ‘best result’ on the first query page?
Total results fixated
• Opposite patterns for navigational and informational tasks when snippet length varies from medium to long.
Proportion of total fixation duration
• How do users distribute their attention across the different elements of a result?
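The attention-distribution measure above reduces to simple arithmetic: each element's share of the summed fixation time. A sketch, with illustrative element names and durations (the `fixation_proportions` helper is hypothetical, not from the study):

```python
# Sketch of the proportion-of-total-fixation-duration measure:
# each element's share of the summed fixation time on a result.
# Element names and durations are illustrative assumptions.

def fixation_proportions(durations_ms):
    """Map each result element to its share of total fixation duration.

    `durations_ms` maps element name -> summed fixation time in ms.
    """
    total = sum(durations_ms.values())
    if total == 0:
        return {name: 0.0 for name in durations_ms}
    return {name: d / total for name, d in durations_ms.items()}
```

For example, with 600 ms on the title, 1200 ms on the snippet, and 200 ms on the URL, the snippet receives 0.6 of the total attention.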
Outline
• Introduction
• Related Work
• Experiment
  • Methods
  • Results
• Conclusions & Next Steps
Conclusions
Problem: how does varying the amount of information affect user performance?
• Adding information to the contextual snippet
  • Increased performance for informational tasks
  • Decreased performance for navigational tasks
• As snippet length increased
  • More attention to the snippet
  • Less attention to the URL
• Snippet length is a dilemma
Future directions
• UI for information retrieval
  • Verify whether moving the URL above the snippet helps
  • Other types of metadata?