A long-term Lebsontech client asked me to report on how often my usability recommendations were followed. The truth is, I didn’t really know. I had done research on a number of different sub-sites for this client and produced a ton of reports. While I had certainly kept tabs on some of those sites, in other cases, after reporting out on the findings, I hadn’t spoken with those particular stakeholders again or gone back to their particular sub-site. In a few cases, the site required a password, and I was only given a temporary password for the duration of the test.
I offered to create a matrix of all of the recommendations that I had given over the last three years and to do my best to assess whether those recommendations had been carried out.
Creating the Matrix
To create the matrix, I started with a spreadsheet that included columns for the expected primary audience group, the activity, the report name, the web pages evaluated, the specific subpages identified, the section of the report, a short description of the issue, and the status (classified as: pending; addressed, or superseded and thus no longer relevant; not completed; or "wish list"). I ended up with a spreadsheet containing 495 rows of data from the usability test reports and 1,295 rows of data from heuristic reviews (evaluations by usability specialists, with no actual or representative users involved), and then created a number of cross-tabs in Excel to see which recommendations were followed and which were not.
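The cross-tabs themselves are simple enough to reproduce outside of Excel. As a minimal sketch, assuming the matrix were exported to a CSV file named recommendations.csv with columns named activity and status (the file and column names here are hypothetical, not from the actual project), pandas can produce the same counts and row percentages:

```python
# Minimal sketch: tallying recommendation status per activity with pandas.
# Assumes the matrix was exported to "recommendations.csv" with columns
# named "activity" (e.g., "usability test" or "heuristic review") and
# "status" -- both the file and column names are hypothetical.
import pandas as pd

matrix = pd.read_csv("recommendations.csv")

# Raw counts of each status within each activity type.
counts = pd.crosstab(matrix["activity"], matrix["status"])

# The same table as percentages of each row, supporting statements like
# "about a third of the recommendations were already addressed."
percentages = pd.crosstab(
    matrix["activity"], matrix["status"], normalize="index"
) * 100

print(counts)
print(percentages.round(1))
```

Keeping the tally scripted also makes it trivial to re-run as statuses change on later check-ins.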
Findings: The Numbers
For the usability studies, about a third of the recommendations had already been addressed or superseded. Another third, particularly those with a more global impact, I was able to identify as queued for updates within the next few months. I had classified about 10% of the recommendations as "wish list" items: really great, value-added new features, but not things that are easy or simple to implement. The remaining quarter of the recommendations were not currently on the table to be updated.
In terms of heuristic reviews, about 30% of the recommendations had been addressed and 15% were in queue, but over half of the recommendations showed no evidence that they were going to be addressed.
While some of the unaddressed recommendations were not especially urgent, others were very significant or were quick fixes that could have been done in no time at all. Issues that remained unfixed included pages that did not work properly in the current version of Internet Explorer, a form where accidentally adding an extra space in a field actually crashed the website, out-of-date information, and spelling errors.
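The stray-space crash illustrates just how small some of these fixes can be. The report does not describe the client's actual form code, so the following is purely an illustrative sketch in Python: trimming whitespace before validation is typically a one-line change.

```python
# Illustrative sketch only: the client's real form handler is unknown.
# Trimming leading/trailing whitespace before validation prevents a
# stray space from being treated as malformed input downstream.
def normalize_field(raw_value: str) -> str:
    """Strip surrounding whitespace so ' 12345 ' validates like '12345'."""
    return raw_value.strip()

def validate_zip_code(raw_value: str) -> bool:
    # Hypothetical example field: a five-digit US ZIP code.
    value = normalize_field(raw_value)
    return value.isdigit() and len(value) == 5

assert validate_zip_code(" 12345 ")    # extra spaces no longer break input
assert not validate_zip_code("1234a")  # genuinely bad input still rejected
```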
What the Findings Really Mean
What was most interesting was the relationship, at the time of each evaluation, between my team of usability researchers and the specific stakeholders responsible for updating any given sub-site.
Usability tests require a higher level of initial input from the specific stakeholders: more questions beforehand and more discussions with the usability team during and after testing. The findings report therefore seems to carry more weight when we identify problems that occurred consistently across real users.
Heuristic reviews, by contrast, were often requested by higher-level client stakeholders, with only limited buy-in from the specific sub-site stakeholders, and often there was no formal presentation of the heuristic-review findings. When there was a higher level of involvement from the sub-site stakeholders, recommendations from heuristic reviews were much more likely to be taken seriously.
Lessons Learned
This was a great exercise for me that demonstrated a number of important points about increasing the ability to effect change:
- Even when those in charge of a site engage with the usability specialists directly, make sure to also engage those at all levels of the organization who would be involved in making any changes.
- Try to get buy-in from all levels, not just from those at the top.
- Always present the findings to those both high up and further down the chain of command. Engage at multiple levels, and present in person if possible, for a better connection.
- Consider asking early on how your recommendations will be used. Build what you learn into the findings report, perhaps in a next-steps section, and make sure to reiterate it when meeting with the client.
- If the budget allows, consider proposing a video of test highlights, particularly if you are concerned about recalcitrant stakeholders. Seeing what really happened, in a quick and powerful video, can be very effective in making your points.
- If possible, check back after an appropriate length of time to find out what has and has not been updated. You need not flag every recommendation that has not been taken, but if anything stands out as particularly problematic, consider sending the stakeholders a note pointing out these continuing issues.
Final Thoughts: Should They Take All Recommendations?
A report from a usability study or a heuristic review is intended to identify issues that could impact the user experience and to offer possible solutions for remediating them. While it would be flattering if stakeholders took all of the recommendations, that is neither necessary nor expected. There can certainly be alternative solutions to any given problem (which, ideally, stakeholders would run by the usability specialist to confirm that new problems are unlikely to be introduced). Additionally, it should be understood that some issues, although identified, are baked into the back-end software and cannot be changed, or require such a level of effort to fix that they really need to wait until the next major release, which could be a year or more away.