Classic usability studies take many hours to prepare, conduct, and write up into a robust report of findings. There is often a recruiting fee to find the “right” participants, and participants themselves typically get paid $75 to $100. In some cases, reaching the right audience requires travel. Classic usability studies are thus often expensive, and because they are expensive, they don’t happen very often.

On the other hand, Steve Krug’s “First Friday” approach recommends running a quick in-house usability study with only a few participants, having observers quickly identify key issues in discussion, and making quick-turnaround fixes to those issues before the next usability study. It’s quick and cheap and can happen more frequently.

Classic usability studies, at least those trying to be as scientific as possible, stick to the script for all participants. Sticking to the script yields a larger, more consistent sample, which in turn makes it possible to count how many participants encountered each issue. The downside to classic usability studies, though, is that the researchers may then have exhausted their usability budget for several months, until the next large-scale study.

The “First Friday” approach also sticks to the script, but essentially stops after only two or three participants, iteratively improves things, and then runs another few participants a few weeks later. High issue counts are generally not even a possibility.

I certainly see the advantages of quick, efficient, and cheap, but I also see the benefit of a more formal approach with more participants. I’ve absolutely seen the value of a well-written (and time-consuming) detailed report: time spent carefully reviewing the data; issues written out with associated recommendations described in detail; illustrative screen captures; and video highlight reels that show stakeholders how key issues actually play out. With some clients, in discussions even a year later, I have found myself pulling up those old video clips and relating previous findings to the current site and its issues.

Often in formal usability tests, however, I have experienced a situation where, after only a handful of participants, a set of problems is so consistent and so painfully clear that I feel I don’t need to run that task anymore. Some clients are very into numbers, the “how many had this problem” approach, but I stress to them that this is qualitative research. It’s very rare to find anything statistically significant.

So what should one do at that point? Continue with the test until the bitter end, likely watching the same problem recur? Or note that this is a very clear problem, and free up time for other tasks and other potential issues?

I had a situation like this recently during a usability test in Mississippi. The client and I had written more tasks than we had time to run, so we designated higher- and lower-priority tasks. The client made time to observe every session, and after each one we reviewed which tasks we felt were exhausted, meaning we had learned as much from them as we could, and which tasks could be swapped in. We still started by laying out a formal testing and reporting plan, but we were willing to be flexible enough to learn more, while being less concerned with actual counts.

In general, I am more comfortable with the more formal testing and analysis plan than with the “First Friday” approach, but I would recommend that my colleagues discuss in advance with their clients whether they are okay with this kind of flexibility. If possible, ask the client to observe the sessions live, either remotely or in person, which brings a whole host of other benefits and additional insights, as well as the potential for on-the-fly flexibility and directional shifts during testing. Yes, it may be a few months until the next test, but if that test also allows for this level of flexibility, that may be just fine!