By Chris Neville-Smith

Usability testing: when inexperience is good



You did everything right. You designed your new software or website to do everything it was supposed to. Everything you need can be accessed in a logical fashion. You've seen to it that everything a user might want to do has been tested thoroughly, and it all works perfectly. You meticulously documented how to use the system, and if you created it for your workplace, trained your employees on how to use it. And what do you get in return? Nobody uses it. Or people take to it, but keep doing it wrong. "It's too difficult", everybody says about your workplace system. People ignore your website and get what they want elsewhere. It's all so staggeringly unfair.


Welcome to the tricky world of usability, the one area of testing where being an expert can be a disadvantage. When you work in IT for a living, it's so easy to forget that things that are second nature to you might confuse the hell out of somebody else. Niggling issues that you work around in an instant can be a blocker to the wider public. Even managing a project without an IT background is no guarantee of immunity. You know the website you commissioned inside out, but the public don't. Something as trivial as terminology might be a big barrier.


An important clarification is that usability is not the same as accessibility. In testing, accessibility refers to ensuring that users with disabilities are not excluded from using a system; usability, on the other hand, applies to everybody and is more about the ease of learning the system. There is a lot of overlap - indeed, poor usability can be an accessibility barrier for users with learning disabilities. But there is one important difference. There are numerous standards for accessibility, most notably WCAG; follow those, and you can't go far wrong. For usability, however, there are no rules. It doesn't matter how much your product conforms to standards or common practice: if the people who are supposed to use it don't understand it, they won't use it.


And this brings us on to the most unusual aspect of usability testing. Almost every other form of testing is best carried out by experts in that field, but the best usability testing specifically needs the input of people who know nothing about the test system. In fact, now is the right time to state the golden rule of usability testing: If you are blaming usability problems on the people who don't understand the system, you are doing usability testing wrong. It doesn't matter how ill-informed your users are: if they abandon your product because they found it too difficult, it's your loss, not theirs.


However, usability testing has a reputation for being laborious. Is it worth going to such expense? Should you just take your chances with customers giving up on you? The good news is that it's not a binary choice between expensive usability testing or nothing. There are quite a lot of options.


Your first usability testing choice: not if, but when


One obvious but important point is that all software gets usability tested somehow. If you do nothing, the usability testing is by default what happens when your product goes live: either your users get on with it, or you swiftly find out otherwise. And this might be fine if you're confident your users will take to it, or it won't be a big deal to put things right if there's a problem. A spreadsheet with a few macros would be a good example of something low-risk that probably needs no organised usability testing before launch.


One important consideration is who your intended audience is. Systems created for internal use within a workplace are generally a less pressing concern than systems for the public, because you can train your workforce. Don't look on training as a substitute for usability, though; if you create a system that flies in the face of all intuition, training probably won't be enough to save you. It is also worth considering what the consequences of poor usability will be. Most users on, say, a gaming site message board should be tech-savvy enough to cope, and it's only of minor consequence if some people find it too complex and give up. If it's a website that gives vital healthcare-related communications to an elderly population, however, you really do not want to just release it to the public and hope for the best. And you don't want potential customers finding a competitor's website easier, either.


Should you need organised usability testing, there are two important principles:


  1. If you are going to do organised usability testing, the earlier, the better. There is no knowing what issues you'll find with usability. Some issues might come down to fundamental design. A rethink in design is feasible if you detect the problem early enough; but if you leave it until last, you'll have a choice between a costly and embarrassing redesign and delay, or releasing anyway and putting up with the consequences. However ...

  2. Even if you should have thought about usability sooner, it is never too late to do usability testing! Good usability testers will be pragmatic with the solutions to whatever issues they uncover. Should they find an issue that ideally warrants a rethink in the design, but the rethink is no longer an option, they will probably have other ideas for how to address the problem, or at least mitigate it.


So what if you're going ahead? It's a big simplification, but there are two main ways of doing usability testing. One is best suited to early testing, the other to late testing, but there's some versatility in which suits you.


Approach 1: In-person testing



Also known as "usability labs", this is the older of the two methods - and indeed, when some people talk of usability testing, this approach is always what they mean. It's a very simple and very low-tech method: just get somebody in with a layman's level of experience, ask that person to complete some tasks the system is designed to do, and see what happens. In principle, you only need a user and a facilitator, but a more organised test might record the session (both the user and the actions on the system). Some setups even have observers watching from behind a one-way mirror.


The most obvious challenge for the facilitator is to avoid being a helper. It's second nature for anyone proficient in IT to say to someone with a computer problem "click here" and "click there", but there's no guarantee the user would do that without you. It's fine to prompt a user who's stuck, or gone down the wrong route - but make sure you've noted what went wrong first, and try not to make this the norm. One way of looking at the facilitator's job is that you have to be just as much of a psychologist as a tester. Is the user making low-level snarky comments throughout the process? That could be your only warning that users in real life will swiftly lose confidence in the system.


The obvious drawback? Usability labs are a very expensive way to get feedback from one person. You will almost certainly need to pay for both the facilitator's and the user's time, and if you want to test multiple users, the costs add up. There is a general consensus that five users is the maximum useful number of testers; this is based on experiments which concluded that almost all the issues had been detected by the fifth user, with subsequent users only highlighting issues already found by previous ones. You can, however, get a lot of information out of those five users, observing better than any remote test not only what goes wrong but why it goes wrong. Should your test users misunderstand how they were supposed to use the system, you can ask there and then what the confusion was.
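If you're curious where the five-user figure comes from, it falls out of a simple diminishing-returns model. Here's a minimal sketch, assuming the classic Nielsen-Landauer finding that a single test user uncovers roughly 31% of usability problems (the true proportion varies from project to project, so treat the numbers as illustrative):

```python
# Diminishing-returns model behind the "five users" rule of thumb.
# Assumes each user independently finds a fixed proportion of the
# usability problems (Nielsen and Landauer reported roughly 31%).

def proportion_found(n_users: int, p_per_user: float = 0.31) -> float:
    """Expected proportion of usability problems found by n users."""
    return 1 - (1 - p_per_user) ** n_users

if __name__ == "__main__":
    for n in range(1, 8):
        print(f"{n} user(s): ~{proportion_found(n):.0%} of problems found")
    # The curve flattens around the fifth user (roughly 85% of
    # problems found), which is why extra lab participants add little.
```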


Usability labs are probably best utilised early in the development process, as soon as you have a preferred design and a working prototype. It doesn't matter too much if there are gaps in the system at this stage - it's fine for a facilitator to, say, ask the user to take the payment process as read. What you really want at this stage is to see how the basic design works out. This is your first opportunity to see if it's as user-friendly as you hoped. If it isn't, you have plenty of time to change the basic design without it causing anything to break. And if it works fine, you can go ahead and develop the rest of the system with more confidence your users will take to the final product.


Approach 2: Remote testing


For most people, the commonest form of remote testing is the messages that pop up on websites asking you to rate your experience. That's a perfectly valid method, and it's in common use because it's easy to do. However, there are limitations to this model.


The response rate will be tiny. You can still get enough responses to do some analysis - however, self-selecting samples usually gravitate towards people who want to say either how great it is, or how terrible it is. People with middling views tend not to take part. This means that if you're serious about checking usability, you should do something more organised.


With a suitable usability testing tool, you can set users tasks and see how they get on. (We've used Loop11 for web usability and are quite happy with it, but there are other equally good tools out there.) The big advantage with this is that you can get a much bigger sample than usability labs can manage, and with that, get some idea of the proportion of people who understand the system. You might uncover a problem, but how many people are being stopped by it? One in fifty, or one in five? That might tell you which issues need prioritising, and which ones you can let go. Tools such as Loop11 can also compile which pages users navigated through and where people clicked, giving a lot of information on where things are working and where people are getting stuck.
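To put rough numbers on that "one in fifty or one in five" question, you can attach error bars to a task's completion rate. A minimal sketch, assuming you've exported per-task pass/fail counts from whatever tool you use (the Wilson score interval is a standard choice for proportions; the figures below are made up):

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score confidence interval for a completion rate."""
    if trials == 0:
        return (0.0, 0.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    margin = (z * math.sqrt(p * (1 - p) / trials
                            + z**2 / (4 * trials**2))) / denom
    return (centre - margin, centre + margin)

# Hypothetical pass/fail counts exported from a remote testing tool.
tasks = {
    "Find the opening hours": (46, 50),
    "Book an appointment": (31, 50),
}
for name, (passed, total) in tasks.items():
    low, high = wilson_interval(passed, total)
    print(f"{name}: {passed}/{total} passed ({low:.0%}-{high:.0%})")
```

Even with fifty participants the intervals are fairly wide, which is a useful reality check before you prioritise one issue over another.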


Setting user tasks, I think, beats asking users about their experience hands down, but you can of course do both. If the majority of users complete a task successfully but still say they find it difficult, something is going wrong that needs further investigation.


There are, however, a couple of challenges to remote testing, neither of which can be solved by test tools.


Firstly, there's the challenge of getting enough participants for a meaningful sample. The feedback form on a web page is no good here. Apart from the issue of a self-selecting sample, there's next to no chance somebody's going to suddenly spend half an hour doing some usability tasks for you. Exactly how you do this depends on what you have access to. If you're lucky, a call-out to an e-mail list might get you enough volunteers. Or you might need some incentive such as a prize draw. Alternatively, you could ask staff at your own organisation to do this (provided they're not the devs). For example, a school website aimed at parents could be tested by school staff; that will probably be an acceptable substitute. You'll need to treat this with some caution, though; terminology that your employees are used to might be unfamiliar to the wider public.


Secondly, you need to think very carefully about the tasks you set, and how you word them. You need to choose tasks that will unambiguously pass or fail. More importantly, your tasks must be clearly understood. In usability labs, you can correct users who misunderstand what you asked them to do; in remote testing, however, a misunderstood task makes that test run useless. Also bear in mind that you can only set a finite number of tasks before participants get bored and stop. Our experience is that you can manage around ten tasks without a significant drop-off; after that, you're pushing your luck. Choose your tasks well.
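One way to keep tasks unambiguous is to write the pass condition down alongside the wording before the test goes out. A hypothetical sketch (the task list and the URL-based success check are purely illustrative, not any particular tool's format):

```python
from dataclasses import dataclass

@dataclass
class UsabilityTask:
    """A remote-testing task with a machine-checkable pass condition."""
    instruction: str  # exactly what the participant will read
    success_url: str  # ending up here counts as a pass

    def passed(self, final_url: str) -> bool:
        # Ignore a trailing slash so near-identical URLs still match.
        return final_url.rstrip("/") == self.success_url.rstrip("/")

# Keep the list short - around ten tasks before participants drop off.
tasks = [
    UsabilityTask("Find out when the clinic is open on Saturdays",
                  "https://example.org/opening-hours"),
    UsabilityTask("Order a repeat prescription",
                  "https://example.org/prescriptions/confirmed"),
]
print(tasks[0].passed("https://example.org/opening-hours/"))  # True
```

Writing the success criterion first also forces you to notice tasks with no single right answer before your participants do.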


Remote testing tends to be better suited to late testing, against a product that is about to be released or has just been released. Testing against alpha-quality software usually requires workarounds for glitches and incomplete features; experienced devs and testers will know how to work around those problems, but that defeats the object of usability testing.

The obvious weakness of late testing is that if you do find something wrong, it's unlikely you'll be able to make any substantial changes. Your options are probably going to be limited to minor content changes in the short term, and design considerations for the next update in the medium to long term. But the better you know what your users understand and what your users struggle with, the better prepared you will be.


What about User Acceptance Testing?


User Acceptance Testing has become a bit of a vague term and, to some extent, a misnomer. There was a time when UAT meant testing carried out specifically by the users who will be using the system in live; it's now often carried out by dedicated testing teams who bear in mind what the business users would be doing. UAT is generally focused on functional testing - there's no rule that says usability testing can't be part of UAT, but I've never seen this included. Do not confuse UAT and usability testing; they are two very different things.


However, UAT can still be a useful way of assessing usability - indirectly. Testers at this final stage of testing should be considering not only what the system is meant to do but how people in the real world are likely to use it. A product might meet its spec perfectly, but the testers might still pick up flaws that stand in the way of what the product was meant to do. Sometimes it might be errors or oversights in the spec itself, but testers can also notice usability problems.


True, professional testers aren't IT novices, and are just as prone as the devs to assuming something easy to them is easy to everyone else, but they are at least a different pair of eyes. If the devs think something is easy to use but the testers struggle, that's a warning sign. If professional testers can't get their heads around your system, what chance does a layman have? Testers also move from project to project and get a good idea of what's common practice. And as far as usability is concerned, common practice is usually good practice.


UAT shares a problem with remote testing: it's done last, meaning it doesn't leave the project much time to fix problems. Last-minute bug fixes are one thing, last-minute design changes are another. Except ... it doesn't have to be this way. If you start planning acceptance testing at the outset, test planners can notice design problems and usability issues that both developers and business analysts missed, and nip them in the bud. You should be doing this anyway, but enhanced usability is a bonus of starting acceptance test planning early.


UAT should not be viewed as a replacement for usability testing; if you want to know for certain how real users cope when they do their own thing, you'll have to try it out and see. But if this isn't an option, some informal testing in the process of UAT is the next best thing.


And finally ...

The thing that must be avoided at all costs in usability testing is getting defensive, whether you are a developer or a designer. There is good practice, but good practice does not guarantee good usability. Nine times out of ten, problems found in usability testing are nobody's fault - just the way things turned out. You can never fully predict how non-technical people will take to technical products. We've often been surprised by the results of usability tests. Honestly, you can set a task to find a certain web page which is literally linked from the middle of the home page, and people still get lost. So don't look for blame, look for solutions.


And on a related note, usability testing is one of the most important forms of testing to do independently. There are always good reasons why it's better for software testing to be done independently (except maybe unit testing), but it's especially true for usability testing. Even if the test users are new to your system, it's very difficult for anyone involved in programming or designing the system to avoid steering the users to the right answer. But it's not just about not marking your own homework. Experienced testers can help you make the most of what you have. Whether you are doing in-person testing or remote testing, they can help you choose the right tasks and the right questions to learn as much as you can. Even if you're not doing dedicated usability testing, independent testers who go from test system to test system can give an informed outside perspective.


At see:detail, we can help with usability testing that's right for you. We will help you make the best of whatever resources you have, and work with you to achieve what you need. If it's dedicated usability testing, we can help you target the questions to get the maximum information on your priorities. In other kinds of testing, we can advise you on good practice to keep your site user friendly. If you are interested, please contact us with your needs.



