Written by Jerry Cao

Usability testing is a fundamental part of the UX design process because it allows designers to understand how their product performs in the eyes of its users. It supports the principle of user-centered design and steers teams away from obsessing over features users are unlikely to use. For best results, don't wait until the very end of development to test for usability. Start usability testing as soon as a basic prototype has been implemented, since important aspects of the design can surface that were not considered in the initial planning. Jerry Cao breaks the process down into 6 easy-to-follow steps for insightful usability testing.


Usability testing makes the difference between design thinking (designing for the user) and obsessing over features. In this piece, we’ll outline the 6 steps to running an insightful usability test:

  • Define Goals
  • Choose the Test
  • Create User Tasks
  • Write a Research Plan
  • Conduct the Test
  • Draft Up a Quick Report

Let’s get started.

1. Define Goals

The first step to any successful usability test is defining your goals. A goal could be broad, such as:

Which checkout methods are most intuitive to our users?

Or specific, such as:

Which form design works best for increasing e-commerce purchases?

Naturally, you’ll have a lot of questions about your product, and this curiosity is good. However, remember to limit each test to only the most relevant issue at the moment. Each test should have a central focus for the most accurate results — the more objectives you test at once, the more room for error.

As David Sherwin mentions in his article on usability testing, the answers to these questions will form your test’s hypotheses.

You can generate hypotheses simply by setting aside time for you and your team to try to answer the goal questions on your own.
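For example, if your goal question is “Which form design works best for increasing e-commerce purchases?”, a hypothesis (invented here for illustration) might be: “A single-page checkout form will produce more completed purchases than a multi-step form.” The test then either supports or rejects that statement.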

2. Choose the Right Test

It’s not about knowing which tests work and which don’t; it’s about knowing which will work for a specific need.

In the free Guide to Usability Testing, we divide the tests into four categories based on Christian Rohrer’s fantastic article:

  • Scripted — These tests analyze the user’s interaction with the product based on set instructions, targeting more specific goals and individual elements. (tree testing, hallway usability tests, benchmark testing)
  • Decontextualized — Ideal for preliminary user testing and persona research, these tests don’t necessarily involve the product, but analyze more generalized and theoretical topics, targeting idea generation and broad opinions. (user interviews, surveys, card sorting)
  • Natural (or near-natural) — By analyzing the user in their own environment, these tests examine how users behave and pinpoint their feelings with accuracy, at the cost of control. (field and diary studies, A/B testing, first click testing, beta testing)
  • Hybrid — These experimental tests forgo traditional methods to take an unparalleled look at the user’s mentality. (participatory design, quick exposure memory testing, adjective cards)

Once you determine the type of usability test(s) to run, you should send out a descriptive announcement to give your team a heads up. It’s even more helpful, in fact, if you summarize your tactics with a quick planning document.

3. Create Your User Tasks

Everything you present to your users during the test — both the content of the question or task and its phrasing — impacts how they respond.

Tasks are either open or closed, and your tests should incorporate a healthy mixture of both:

  • Closed — A closed task offers little room for interpretation — the user is given a question with clearly defined success or failure (“Find a venue that can seat up to 12 people.”). These produce quantitative and accurate results.
  • Open — By contrast, an open question can be completed in several ways. These are “sandbox” style tasks (“Your friends are talking about Optimal Workshop, but you’ve never used it before. Find out how it works.”). These produce qualitative and sometimes unexpected results.

[Image source: “A Five-Step Process for Conducting User Research.” David Sherwin. Smashing Magazine.]

Read Tingting Zhao’s piece for more advice on optimizing tasks.

As for the wording, be careful to avoid bias. Just one wrong word can skew results.

For example, if you want to find the most natural ways in which users browse your online shop, a task like “It’s 10 days before Christmas and you need to search for a gift for your mother” might lead the user straight to the search function, as opposed to browsing the way they normally would.

4. Write a Research Plan Document

Modified from Tomer Sharon’s One-Pager (fantastically helpful yet lightweight), the research plan document we use at UXPin is a formalized announcement with all the necessary details of the testing.

You want to hand your team a slim document (around one page) to encourage them to actually read it.

While keeping things brief, you’ll want to cover at least these 7 sections:

  • Background — In a single paragraph, describe the reasons and events leading to the research.
  • Goals — In a sentence or two (or bullets), summarize what the study hopes to accomplish. Phrase the goals objectively and concisely. Instead of “Test how users like our new checkout process,” write “Test how the new checkout process affects conversions for first-time users.”
  • Questions — List out around 5-7 questions you’d like the study to answer.
  • Tactics — Where, when, and how the test will be conducted. Explain why you’ve chosen this particular test.
  • Participants — Describe the type of user you are studying, including their behavioral characteristics. You could even attach personas (or link to them) for more information.
  • Timeline — The dates for when recruitment starts, when the tests will be expected to take place, and when the results will be ready.
  • Test Script — If your script is ready, include it here.
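
To make the format concrete, here’s a bare-bones skeleton for a hypothetical e-commerce study (every detail below is invented for illustration):

  Background: Cart abandonment rose sharply after the Q2 checkout redesign.
  Goals: Test how the new checkout flow affects task completion for first-time buyers.
  Questions: Where do users hesitate? Which form fields trigger errors? Do users notice the guest checkout option?
  Tactics: Moderated, in-person tests of the lo-fi prototype, chosen because the prototype is still too rough for unmoderated sessions.
  Participants: Five first-time buyers who shop online at least once a month.
  Timeline: Recruiting June 1-7, testing June 8-12, results by June 15.
  Test Script: Attached as a second page.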

Check out Sharon’s sample One-Pager to see how it should look.

Encourage your team members to give suggestions or advice so that the test results are helpful to everyone. Find out which questions they want answered as well.

5. Conduct the Test

After gathering feedback from the team, you’re ready to actually conduct the test. This involves recruiting the right participants, scheduling times, and writing the actual test documentation.

[Photo credit: Free Usability Testing Kit]

Jeff Sauro, founder of Measuring Usability LLC, lists 7 methods for user recruitment, including online tools. In our experience, we’ve found hallway testing and tools like UserTesting incredibly helpful.

As for your role during the actual test, you sometimes must choose between being present (moderated) and letting the user work on their own (unmoderated). You can also choose to conduct your test on-location or remotely.

  • Unmoderated — Unmoderated tests are cheaper, faster, and generally easier to recruit and schedule. They also remove the influence of a moderator, leading to more natural and less biased results. On the downside, there is less opportunity for follow-up questions or supporting users who go astray during tests.
  • Moderated — While costlier and requiring more effort to organize, moderated tests allow you to “lead” the user, for better or worse. Moderated tests are recommended for rougher prototypes (higher risk of bugs and usability issues) or incredibly complex prototypes (users might need some clarification).

While every test has different qualities and best practices, the following advice works across the board:

  • Make users comfortable — Remind them you are testing the product, not their capabilities. A test script helps ensure you hit a few reassuring points at the beginning of each test.
  • Don’t interfere — Staying hands-off avoids bias, and may reveal insights into user behavior you hadn’t predicted. The best insights usually come when a user isn’t engaging with the product the way it was designed. Pay attention to workarounds and let them inspire feature improvements.
  • Record the session — This makes a solid reference point for later, when interpreting the results. If you’re running the test through UXPin, you can record data like facial reactions, clicks, and all audio.
  • Collaborate — Tomer Sharon suggests creating a Rainbow Spreadsheet to allow everyone to record their own interpretations for quick comparisons later. We used his spreadsheet during our Yelp redesign exercise and found it was very helpful for summarizing results for designers and stakeholders.
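
To picture the rainbow spreadsheet: each observation gets a row, each participant gets a color-coded column, and a cell is marked whenever that participant exhibits the behavior. A stripped-down sketch (observations invented for illustration) might look like:

  Observation                        P1  P2  P3  P4  P5
  Missed the search field            x       x   x
  Backtracked during checkout            x           x
  Asked what “Save draft” means      x   x       x

A row full of marks is a pattern worth prioritizing.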

[Image source: Based on exercise suggested by Tomer Sharon]

6. Draft Up a Quick Report

The usability report is how you share the results with the team, so that everyone’s on the same page.

To keep the results organized and readily available, we suggest creating a cloud folder with universal access.

As you write the report, keep the following tips in mind:

  • Avoid vagueness — Mentioning that “Users couldn’t buy the right product” isn’t very helpful since multiple factors might be involved. Perhaps the checkout process was difficult, or the product listings were hard to browse. Explain the root of each issue from an interaction design and visual design perspective (e.g. confusing layouts, a checkout process with too many steps, etc.).
  • Prioritize issues — Regardless of how many issues you find, people must know what’s most important. We recommend categorizing the report (e.g. Navigation Issues, Layout Issues, etc.) and then adding color tags depending on severity (Low/Medium/High). List every single issue, but don’t blow any out of proportion. For example, don’t say that a red CTA button led to poor conversion if the steps of the checkout process don’t make sense.
  • Include recommendations — Don’t include any hi-fi prototypes or mockups in the usability report, but definitely suggest a few improvements. To supplement written suggestions, our own UX Researcher Ben Kim also links to lo-fi wireframes or prototypes in a UXPin project dedicated to usability testing.
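
For instance, a single well-scoped entry in the report (invented for illustration) might read: “Checkout Issues (High): 4 of 5 participants missed the guest checkout link because it sits below the fold. Recommendation: move the link above the account creation form; see the linked lo-fi wireframe.”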

The usability report should be a folder, not a single file. Don’t forget to include things like:

  • Formal usability report
  • Supporting charts, graphs, and figures
  • Previous testing documentation (e.g., the list of questions users were asked)
  • Videos or audio tracks of the test (which is why it’s good to record sessions)

The documentation is just the starting point. Schedule a follow-up meeting with the team to review the usability report and relevant data, discussing issues and the outlined recommendations.

Conclusion

Don’t wait until the end of the project to conduct your usability testing. Once you have a lo-fi prototype, start testing. The data is less about validation and more about inspiration: test early, and test often, so you can actually put the results to use before it’s too late.

To get started on your next usability test, download the free Usability Testing Kit created by our CEO Marcin Treder. The kit includes 5 templates for planning, running, and documenting your usability test.