Papers/Notes: 1001 Users

Thursday, April 15
11:30 AM - 1:00 PM

Think-Aloud Protocols: A Comparison of Three Think-Aloud Protocols for Use in Testing Data-Dissemination Web Sites for Usability
Erica L. Olmsted-Hawala, U.S. Census Bureau, USA
Elizabeth D. Murphy, U.S. Census Bureau, USA
Sam Hawala, U.S. Census Bureau, USA
Kathleen T. Ashenfelter, U.S. Census Bureau, USA

Three think-aloud protocols (traditional, speech-communication, and coaching) were compared in usability tests. Results show that accuracy and satisfaction are significantly higher in the coaching condition; there were no significant differences in efficiency.

Powerful and consistent analysis of Likert-type rating scales
Maurits Kaptein, Eindhoven University of Technology, the Netherlands
Clifford Nass, Stanford University, USA
Panos Markopoulos, Eindhoven University of Technology, the Netherlands

Describes a nonparametric method for analyzing data from Likert-type scales in factorial experiments. The approach is invariant under monotone transformations. An accompanying website supports researchers in the analysis process.
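A minimal sketch (not the authors' method or code) of why rank-based nonparametric statistics are invariant under monotone transformations: any strictly increasing transformation of the responses leaves the pooled ranks, and hence a rank statistic such as the Mann-Whitney U, unchanged. The function names below are hypothetical.

```python
import numpy as np

def midranks(v):
    # 1-based average ranks of v, assigning tied values their midrank.
    order = np.argsort(v, kind="stable")
    ranks = np.empty(len(v), dtype=float)
    sv = v[order]
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and sv[j + 1] == sv[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # midrank of positions i..j
        i = j + 1
    return ranks

def u_statistic(x, y):
    # Mann-Whitney U for sample x: rank sum of x minus its minimum possible value.
    r = midranks(np.concatenate([x, y]))
    return r[:len(x)].sum() - len(x) * (len(x) + 1) / 2

# Hypothetical 7-point Likert responses for two conditions.
a = np.array([5, 3, 4, 7, 2, 6, 5, 4])
b = np.array([2, 1, 3, 4, 2, 5, 3, 1])

# Squaring is strictly increasing on positive scores, so ranks are preserved
# and the U statistic is identical before and after the transformation.
print(u_statistic(a, b) == u_statistic(a**2, b**2))  # → True
```

The same invariance fails for mean-based statistics such as the t-test, which is one motivation for rank-based analysis of ordinal scale data.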

Measuring the User Experience on a Large Scale: User-Centered Metrics for Web Applications
Kerry Rodden, Google, USA
Hilary Hutchinson, Google, USA
Xin Fu, Google, USA

Introduces the HEART framework of large-scale user-experience metrics (Happiness, Engagement, Adoption, Retention, and Task success) and the Goals-Signals-Metrics process for defining them. Includes examples from real applications.

Are your participants gaming the system? Screening Mechanical Turk Workers
Julie S. Downs, Carnegie Mellon University, USA
Mandy B. Holbrook, Carnegie Mellon University, USA
Steve Sheng, Carnegie Mellon University, USA
Lorrie Faith Cranor, Carnegie Mellon University, USA

A screening process to identify non-conscientious survey participants, tested on Amazon's Mechanical Turk. A qualification test can be used to exclude problematic participants, who vary systematically in age, sex, and occupation.

Trained to Accept? A Field Experiment on Consent Dialogs
Rainer Boehme, International Computer Science Institute, Berkeley, USA
Stefan Koepsell, Technische Universitaet Dresden, Germany

A field experiment with 80,000 users shows that even security-conscious users click "accept" when a dialog resembles an end-user license agreement, thereby blindly agreeing to possibly unwanted terms.

