10 Best Practices for Software Usability Testing
September 22, 2009
I have been doing software usability testing for about 10 years, and here are 10 rules I go by every time I run a usability test. I have taken the liberty of calling them “best practices” only because they work really well for me. Take that as one man’s opinion if you want, since your mileage may vary depending on your product or service.
Rule 1: Check your ego at the door before you sit down with your participants. The objective is to figure out whether the software behaves the way “real users” expect it to. It is not a test of the users. And believe it or not, your software will NOT always behave the way your participants expect it to. Do not get ticked off if the user slams your product or the new feature s/he is being asked to test – no one cares how much time you and/or your development team put in to build it; all that matters is whether the user can do what s/he wants to do the way s/he expects to.
Rule 2: Make the user comfortable at the start of the test. Why? Because it is difficult not to be nervous when someone is watching over your shoulder while you try to figure out a half-baked, buggy new piece of software you may never have used before. Chit chat, get the user some coffee or water. Do not dive right in.
Here is what I tell my participants before we start the software testing. “I want to make sure you understand that we are testing the software ‘usability’ and not you the ‘user’. There is nothing, and I mean NOTHING, that you can do wrong in this test. If you don’t like something, tell us and tell us why. If you like something, tell us and tell us why. It is important for us to know both, so that we don’t ruin what you like while trying to eliminate or fix what you don’t like. What we are trying to determine here is the deviation between how you expect to do certain tasks using our product and the way the product actually behaves. I would really appreciate it if you could think out loud while you work through the tasks you are about to do.”
Rule 3: Plan the test. Have a written script for the tasks the user is supposed to do. Don’t change the script from one user to the next and, importantly, stick to the script.
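A written script can be as simple as a printed page, but keeping it in a small structure makes it easy to reuse verbatim across sessions. Here is a hypothetical sketch in Python – the product, tasks, prompts and time budgets are all made up for illustration:

```python
# A hypothetical usability-test script: the same tasks, in the same order,
# with the same wording, for every participant.
test_script = [
    {"task": "Create a new invoice for an existing customer",
     "prompt": "You need to bill Acme Corp for last month's work. "
               "Show me how you would do that.",
     "time_budget_min": 7},
    {"task": "Export the invoice as a PDF",
     "prompt": "Your customer wants a copy by email. "
               "How would you produce one to send?",
     "time_budget_min": 7},
]

def print_script(script):
    """Print the moderator's run sheet, one numbered task per line."""
    for i, step in enumerate(script, start=1):
        print(f"Task {i} ({step['time_budget_min']} min): {step['task']}")
        print(f"  Say: {step['prompt']}")

print_script(test_script)
```

Because the prompts are written down word for word, every participant hears exactly the same thing, which is what makes results comparable from one session to the next.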
Rule 4: Forget about recording the audio/video of the test. Don’t listen to the usability pundits who tell you to do this. For the same reason, I have never spent a dime hiring professional usability testers or renting facilities with one-way mirrors. Here are the reasons I do not recommend any of this:
- It is expensive – read – $$$$$. I just cannot justify the extra cost compared to the return of doing it myself.
- It is time consuming to set up, and one more thing that can go wrong. I have tried tools like Morae, and even they introduce unnecessary complexity and new modes of failure.
- I like to keep it very simple, and to me nothing beats a simple setup with my users. After all, it is my job as a product manager to make sure my product behaves the way the user expects it to.
- Getting the user to relax is difficult enough as it is. Their nervousness goes up even higher if they find out that whatever they say or do is being recorded, or that they are being watched by folks behind a one-way mirror.
- No one is going to watch or listen to the recording – do you really think your product team has the time? Putting together the test is hard enough. I would rather spend that time figuring out how to resolve the issues found.
So what do I use? The good old pen and paper. I take copious notes while the user works through the tasks. Talking, writing and observing are hard to do all at the same time, so this helps you cut down on the “talking” part. The main point of the testing is to observe.
Rule 5: Never ever take control of the mouse. Usability testing is not a demo of the brand new widget you have slogged to create. So if the user cannot figure it out, that is not good. After all, when the product ships, you are not going to be there to help the user figure it out. The objective is simple – “Can the user figure it out on his own, and does it behave the way s/he expects it to? If not, why not?” On the other hand, this does not mean you have to leave the user lost, frustrated and feeling helpless if he cannot figure it out. Give it some time, and if the user still cannot figure it out, provide some hints to see if they help. If they do not, note it down as a serious issue and ONLY THEN show the user how it is supposed to behave. But again, make sure you understand that you may have a serious usability issue on your hands.
Rule 6: Never do the usability test alone or without announcing it to your product team. Announce it, share the test schedule, and get one person from your product team to help you during the testing. But keep it to one additional person, so that you don’t make the user nervous. My choice for that one person: the developer who worked on the new thing you are testing. This extra person can help you take notes, observe from his own perspective, and so on. Plus, it is one more witness to the test, so that later, when you present your findings to the team, you do not have to deal with the “data you collected” being dismissed as “your opinion” or blamed on an incorrect testing method. Trust me, this happens.
Rule 7: Make the test no longer than 45 minutes. Anything longer and you will see a significant drop in feedback from the user; it is very difficult for a nervous user to stay engaged for more than 45 minutes. How do you keep it this short? Limit the user to 2 or 3 tasks that can be easily completed in 20 minutes or so. They will take the full 45 minutes during the test because of the Q&A and discussion.
Rule 8: Never make any changes to your product unless you have convergence from multiple tests. Don’t run to your development team after one test and ask them to make changes. Wait until you have more data points from more tests and can see a pattern in the issues that are tripping users up. How many tests do you need? I recommend at least 5 data points.
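One low-tech way to check for convergence is to label each issue consistently in your notes and tally how many sessions it shows up in, only acting on issues that recur. A minimal sketch, with made-up issue labels and a threshold you would choose yourself:

```python
from collections import Counter

# Issues noted per session, labeled consistently across tests (illustrative data).
sessions = [
    ["cannot find export button", "confusing save dialog"],
    ["cannot find export button"],
    ["confusing save dialog", "cannot find export button"],
    ["unclear error message"],
    ["cannot find export button"],
]

def recurring_issues(sessions, min_sessions=3):
    """Return issues seen in at least `min_sessions` distinct sessions."""
    counts = Counter(issue for notes in sessions for issue in set(notes))
    return {issue: n for issue, n in counts.items() if n >= min_sessions}

print(recurring_issues(sessions))
# "cannot find export button" shows up in 4 of 5 sessions -- a real pattern,
# while "unclear error message" (1 of 5) may just be one user's quirk.
```

The point is not the tooling; even tick marks on paper work. What matters is that a fix is only requested once the same issue has tripped up several users.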
Rule 9: Analyze the data. After all the tests are completed, read through your extensive notes and process them with the colleague who observed the tests with you. What did the users really like? (Make sure you preserve this.) What are the things you need to fix? What should the action plan be to resolve these issues? Do it right after all the tests are completed, while it is all fresh in your mind.
Rule 10: Act upon the data and execute your action items. Make sure your stakeholders know about the results and what you learnt. Make sure they know your product team participated in the effort and learnt a lot from it. It is not “your” show; make sure they understand it was a “we” show and that the product team is on board to make the changes. Then make the changes and repeat the tests if needed.
Note: I have had a lot of success doing usability testing using web conference tools such as GotoMeeting and have successfully applied these same rules.
Your thoughts from your experiences – are there any other best practices you have learnt along the way? Do you agree or disagree with the above? Please share via comments.
If you enjoyed this post, please consider leaving a comment or subscribing to the feed to receive future articles delivered to your feed reader.