Convergent Usability Evaluation

Discussion: Convergent Evaluation

Problems recorded
The problems recorded in EIRS ranged from the shocking to the mundane. Many voters simply couldn't find their polling place.

Initially, the opportunistic, multi-pronged, low-impact approach we used to assess and improve EIRS’ usability seemed like making the best of an undesirable situation. In retrospect, we feel that the potpourri of evaluation methods we used yielded results that compare favorably with those of more formal methods.

It certainly was less expensive. We worked as volunteers, but even had we been paid for our time, one formal usability test would have consumed more hours – and cost more – than did all of the methods we used combined.

Beyond cost considerations, using multiple evaluation methods allowed us to see potential usability problems from different perspectives: our own, that of trainers trying to teach EIRS to others, that of trainees trying to understand EIRS, that of users in the midst of fielding problem calls, and that of seasoned users reflecting on their experience. This gave us important insight into which usability problems were more serious, and which were best remedied by redesign vs. training. Examples of how the evaluation methods complement each other include:

  • A browser-compatibility problem seen during an expert review was dismissed as low priority until it also was seen in a training session and Q/A testing.
  • Reviewers of the incident forms questioned the need for certain fields to be “required”.  We later witnessed the trouble such fields can cause for users in the hectic call-center environment.

  • Some call-center managers complained during training that giving each user a unique login was problematic. Unfortunately, implementation of the account-creation process overlapped with training, so fixes to the account-creation system were incorrectly believed to address the problems found during training. A subsequent review of EIRS’ account-creation functions found problems, but none seemed serious enough to prevent managers from registering every user. The gravity of the problem was recognized only on election day, when call-center managers refused to register volunteers and instead used generic logins. Worse yet, some call centers were non-operational for a time because some manager accounts, the only on-site accounts powerful enough to create new accounts, did not function correctly. Fortunately, EIRS did not crash and is not known to have irretrievably lost or corrupted data. In retrospect, it is clear that managers were extremely busy during the start-up of their call centers; this was the worst possible time to learn a complex, seldom-used function of the system.

  • Participating in Q/A testing primed us to spot, in our field observations, important usability problems that were hard for users to describe or that appeared only occasionally.

It is a fair question whether our findings were as valuable as those that more conventional methods might have produced.  Some argue that discount methods are less effective and must be fortified to improve their value [5]. Perhaps, but we feel that combining methods as we did provided significant mutual fortification.  Certainly many important usability problems in EIRS were exposed and corrected, and many more were identified for future revisions.  It is also worth noting that a usability test would not have caught some of the problems we found, such as the fact that most call-center managers prefer generic volunteer accounts over giving each volunteer an account.

In any case, discount methods were all the project could afford.  Therefore, comparing the efficacy of discount vs. conventional methods is moot in this case. The real alternative to what we did was to do nothing, and simply hope EIRS would be usable on election day. Without question, our methods were better than nothing.

The low-cost, low-impact usability evaluation methods used in the EIRS project shed light on each other. Their findings did not simply add up, as they would if we had merely used more experts in a heuristic evaluation. Because of their different perspectives, the methods converged, yielding a total evaluation that is greater than the sum of its parts. We therefore refer to using these methods in concert as convergent usability evaluation.


