LU #9 – Evaluation Techniques II

SLIDES

08-02 (box.fugitlab) | 08-03 (box.fugitlab) | 09-03 (box.fugitlab)

Conducting an Evaluation

  1. Define the purpose and goals of the usability test
  2. Define the needed participant characteristics
  3. Select an appropriate method (Test Design)
  4. Define a task list
  5. Set up the test environment, equipment, and logistics
  6. Describe the data to be collected and the evaluation measures
  7. Prepare a consent form and check ethical issues
  8. Report the results

Remote Usability Testing

  • Synchronous Remote Usability Testing
  • Asynchronous Remote Usability Testing (Self-moderated and Automated)
    • Auto logging
    • User-reported Critical Incidents (UCI)
    • Unstructured problem reporting
    • Forum-based online reporting and discussion
    • Diary-based longitudinal user reporting
  • Tools for remote testing
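Auto logging in asynchronous remote testing means instrumenting the interface so that timestamped interaction events are recorded without a moderator present and can be uploaded for later analysis. A minimal sketch of this idea (the event names and fields here are illustrative, not a standard schema):

```python
import json
import time


class InteractionLogger:
    """Minimal auto-logging sketch for unmoderated remote sessions:
    records timestamped UI events in memory and serializes them to
    JSON so they can be uploaded after the session ends."""

    def __init__(self):
        self.events = []

    def log(self, event: str, **details):
        # Each record carries a wall-clock timestamp, the event name,
        # and any extra details the UI callback passes in.
        self.events.append({"t": time.time(), "event": event, **details})

    def export(self) -> str:
        # Serialize the full event stream for upload or file storage.
        return json.dumps(self.events)


# Usage: call log() from UI event handlers (names are hypothetical).
logger = InteractionLogger()
logger.log("click", target="submit_button")
logger.log("error", message="form validation failed")
```

The same structure also supports User-reported Critical Incidents: the participant triggers an extra `log("critical_incident", ...)` entry when something goes wrong.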

Quantification

  • GOMS and KLM
  • Fitts’ Law
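Both quantification techniques above reduce to small formulas: KLM predicts expert task time by summing standard operator times, and Fitts' Law predicts pointing time from target distance and width. A sketch of both, using the commonly cited KLM operator estimates and the Shannon formulation of Fitts' Law (the `a` and `b` defaults are illustrative, not measured for any particular device):

```python
import math

# Standard KLM operator time estimates in seconds (Card, Moran & Newell).
# K here uses the "average skilled typist" value.
KLM_TIMES = {
    "K": 0.2,   # keystroke or button press
    "P": 1.1,   # point with the mouse to a target
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.1,   # mouse button press or release
}


def klm_estimate(operators: str) -> float:
    """Predicted expert execution time for an operator sequence,
    e.g. 'MPBK' = mentally prepare, point, click, type one key."""
    return sum(KLM_TIMES[op] for op in operators)


def fitts_mt(distance: float, width: float,
             a: float = 0.1, b: float = 0.15) -> float:
    """Movement time via the Shannon formulation of Fitts' Law:
    MT = a + b * log2(D/W + 1), where log2(D/W + 1) is the index
    of difficulty in bits. a and b are device-specific constants
    fit from empirical data; the defaults are placeholders."""
    return a + b * math.log2(distance / width + 1)
```

For example, `klm_estimate("MK")` gives 1.35 + 0.2 = 1.55 s, and a target one width away (`D = W`) has an index of difficulty of exactly 1 bit.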

Additional Material

Check out the box.fu folder for additional material.

Antti Oulasvirta. 2019. It’s time to rediscover HCI models. interactions 26, 4 (July-August 2019), 52–56. DOI:https://doi.org/10.1145/3330340