Recently I had the opportunity to conduct a usability test for a client that had previously used another vendor for their research.  As we worked together to develop screening categories for the study, we discussed the technological sophistication of their users. They knew that while some of their users were expected to have limited computer experience, by and large their typical users were likely to be reasonably comfortable using web and mobile devices.

Their prior vendor had recommended, in the name of “universal usability,” that they not screen out those with limited computer experience, so that the sample would be realistic.  The outcome of this decision was that those with very limited experience failed most, if not all, of the tasks because they didn’t understand the web. Since even the best usability practices on the website could not supply the general understanding of the web those participants lacked, the value of including them becomes questionable.

My client’s goal in conducting formative research, both in the original testing and now, was not to identify what percentage of the population could successfully use their site (certainly not possible with a limited sample size of 10).  Rather, it was to determine which usability problems would impact user performance.

Total failures don’t yield good data.

Employing usability best practices assumes a level of conformance with the standards that users have come to expect.  When users don’t have the base knowledge necessary to know what those standards are, observed failures may reflect that missing knowledge rather than a departure from the standards.  Thus, the findings and the subsequent recommendations may not be as useful as they could be.

Total success may not yield good data either.

On the other hand, I’ve had clients who decide that they need to do their own recruiting from their user base.  Inevitably, no matter how much we talk about avoiding users who are intimately familiar with the product, some participants in the sample turn out to be expert users.  Expert users are useful if a client wants to assess whether sophisticated users will be able to re-orient to a new design, but they will not help in understanding the experience of a user who lacks that prior exposure.

What level of sophistication is ideal?

My target minimum level of web sophistication is someone who is comfortable on the target device, be it a regular computer or a mobile device (or, in some cases, both).  When the goal is to understand usability issues for new users, participants should not be intimately familiar with a prior iteration of the resource, and they should not have insider knowledge that would bias the findings.

There are times, however, when it is useful to recruit participants at the lowest level of sophistication.  Perhaps that is the typical user, not just the minimum.  Or perhaps the focus of the research is not on use of the technology but on understanding of the content.  In those cases, even if a participant does not understand web conventions and fails a task that requires finding a particular piece of information, a test administrator can note the failure, then take them to the content and ask a comprehension question that is separate from the use of the technology.

Other types of research: Focus Groups & Card Sorts

I’ve written before about the UX value of focus groups – what focus groups are good for and what they aren’t.  Since focus groups are often a discussion around user needs, determining user sophistication may be useful for background knowledge, but may not be necessary as a screening category.

Card sorts have participants sort a set of cards into categories so that you can see statistically how topics cluster, in order to create or update the site’s organization.  With card sorts, participants do not necessarily need sophisticated technological knowledge, but they do need a full understanding of the relevant range of topics.  A set of topics that includes both financial information for employees and information for employers may not yield useful results for the employer content when sorted by employees.  Even if the employee participants understand the general topics that employers deal with, they may not know enough to sort that information meaningfully.  In cases where each audience knows only a subset of what the site contains, it may be advisable to run separate sorts with separate audience groups, each covering the content most relevant to that group (and these groups of content could contain some overlap).
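For practitioners who analyze their own card sort data, the clustering described above is commonly derived from a card-by-card co-occurrence matrix: counting how often each pair of cards lands in the same pile across participants. A minimal sketch, using hypothetical card and pile names purely for illustration:

```python
from itertools import combinations

# Hypothetical data: each participant's sort maps pile names to card lists.
sorts = [
    {"Benefits": ["401k", "Health plan"], "Hiring": ["Job postings", "Payroll setup"]},
    {"Money": ["401k", "Payroll setup"], "Insurance": ["Health plan"], "Jobs": ["Job postings"]},
]

cards = ["401k", "Health plan", "Job postings", "Payroll setup"]

# Co-occurrence: how many participants placed each pair of cards together.
cooccur = {pair: 0 for pair in combinations(cards, 2)}
for sort in sorts:
    for pile in sort.values():
        for pair in combinations(sorted(pile), 2):
            cooccur[pair] += 1

# Pairs with high counts suggest topics users see as belonging together.
for pair, count in cooccur.items():
    print(pair, count)
```

Counts like these feed into the similarity matrices and dendrograms that card sort tools report; the point here is simply that the statistics come from pairwise agreement across participants, which is why topically knowledgeable participants matter.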

Determine what user types will give the most valuable results.

Ultimately, when doing qualitative or formative research, develop a screener with an understanding of not only who the users are, but which user attributes are likely to produce participants who will lead to the most valuable findings about what to improve on the site.

If you end up having limited control over user sophistication, or you quickly see that some participants aren’t going to support the kind of evaluation you need, determine what you can still learn from them and be prepared to ad-lib a bit, or improvise a separate activity on the fly that could still provide valuable data.

Image: RTimages / Bigstock.com