Monday, 31 October 2016

Seminar 2 notes - Joakim

Seminar 2

Chapter 13

Chapter 13 introduces the concept of evaluation, and the authors stress its importance as part of the design process. It is important to evaluate continuously, known as formative evaluation, to check that the product fulfills the requirements; both low-tech prototypes and full systems can be evaluated. At the end of the process a summative evaluation is carried out. Evaluation concerns both usability and user experience.

There are different methods that can be broadly categorized as

  • Controlled settings involving users, e.g. laboratory experiments for usability testing
  • Natural settings involving users, e.g. field studies in a realistic context
  • Any settings not involving users, e.g. analytics and models

The first has the advantage of tight control, but the artificial environment could bias the evaluation data. A natural setting has the advantage that users are in the correct context ("in the wild" studies), but it can be difficult to anticipate what will happen. The idea of a living lab is to try to combine these two. A quick and cheap way to assess a prototype or product is to use heuristics, that is, to apply knowledge of typical users, rules of thumb, etc. to create models of user behavior.

Things to consider

  • Participants' rights and consent
  • Reliability (Are the results repeatable?)
  • Validity (Are we measuring the intended thing?) and ecological validity (Does the evaluation environment affect the results?)
  • Biases (Are the results distorted systematically? E.g. a specific group of expert evaluators might be more sensitive to one design flaw than to others)
  • Scope (Can we generalize the results?)

Chapter 14

Here evaluation studies are described in a spectrum of settings, from controlled to natural.

Usability testing

Usability is tested in a controlled setting, such as a lab or a temporary makeshift lab. The idea is to collect quantitative data about users' performance on predefined tasks, e.g.

  • Time to complete task
  • Number of errors
  • Number of times the user consults the manual
  • etc.
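As a sketch of how such quantitative measures might be summarized, here is a minimal Python example; the session logs, participant IDs, and field names are hypothetical and stand in for whatever the test software would actually record.

```python
from statistics import mean

# Hypothetical session logs from a usability test: one dict per
# participant, with task completion time (seconds), error count,
# and number of manual consultations.
sessions = [
    {"participant": "P1", "time_s": 74.2, "errors": 3, "manual_lookups": 1},
    {"participant": "P2", "time_s": 58.9, "errors": 1, "manual_lookups": 0},
    {"participant": "P3", "time_s": 91.5, "errors": 5, "manual_lookups": 2},
]

# Aggregate the predefined measures across participants.
avg_time = mean(s["time_s"] for s in sessions)
avg_errors = mean(s["errors"] for s in sessions)
total_lookups = sum(s["manual_lookups"] for s in sessions)

print(f"mean completion time: {avg_time:.1f} s")
print(f"mean error count:     {avg_errors:.1f}")
print(f"manual consultations: {total_lookups}")
```

In a real study these aggregates would be compared across conditions or against usability targets set in the requirements.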

Usability can also be tested remotely (users test in their own setting and the data is logged). Experiments aim to test a hypothesis: the experimenter manipulates the independent variable(s) to measure their effect on the dependent variable(s), while all other variables are held constant.

Different experimental designs:

  • Different-participant design (different participants perform in different conditions; a large, randomly allocated group is needed to avoid bias)
  • Same-participant design (all participants perform in all conditions)
  • Matched-participant design (participants are paired based on characteristics, e.g. expertise)

In the wild studies

  • How do people interact with technology in a natural setting?
  • Evaluators want to discover how new products or prototypes will be used, and to explore novel designs.

Chapter 15

This chapter describes methods for understanding users through heuristics, remotely collected data or models. That is, methods without having to involve users in the evaluation step.

Heuristic evaluation

Experts evaluate the interface against usability guidelines (heuristics).

Walkthroughs

Predict user problems by walking through a task noting problematic usability features. E.g. cognitive walkthrough.

Analytics

Evaluating user traffic through a system, e.g. web analytics (user activity such as total visitors, traffic sources, etc.).
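A toy sketch of the kind of tallying an analytics tool performs; the page-view log and traffic-source labels below are invented for illustration.

```python
from collections import Counter

# Hypothetical page-view log: (visitor_id, traffic_source) pairs,
# the kind of data a web-analytics tool collects automatically.
pageviews = [
    ("u1", "search"), ("u2", "direct"), ("u1", "search"),
    ("u3", "referral"), ("u2", "search"), ("u4", "direct"),
]

total_views = len(pageviews)
unique_visitors = len({visitor for visitor, _ in pageviews})
by_source = Counter(source for _, source in pageviews)

print(f"page views:      {total_views}")
print(f"unique visitors: {unique_visitors}")
print(f"traffic sources: {dict(by_source)}")
```

The point is that such measures come for free from logged traffic, without involving users in an evaluation session.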

Predictive models

Predict user behavior based on a model, for example Fitts's Law, which predicts the time to reach a target with a pointing device.
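Fitts's Law in its common Shannon formulation is MT = a + b · log2(D/W + 1), where D is the distance to the target and W its width. A minimal sketch; the constants a and b are device-dependent and must be fitted from data, so the values used here are illustrative only.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts's Law: MT = a + b * log2(D/W + 1).

    a (intercept, seconds) and b (slope, seconds/bit) are
    device-dependent constants; the defaults are illustrative only.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Predicted time to reach a 40 px wide target 200 px away.
mt = fitts_movement_time(distance=200, width=40)
print(f"predicted movement time: {mt:.2f} s")
```

Note how the model captures the intuition behind the law: making the target wider (or closer) lowers the index of difficulty and hence the predicted movement time.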

Questions

  • Which evaluation methods are feasible within our project?
  • Are the results valid and reliable? What about biases?
  • How should we balance settings involving users with evaluation methods without involved users?
