Tips for a Successful Remote Usability Test

Chris Capuano
March 10, 2023
5 min read

The simplest way to know if a product or service is working is to put it in front of the people who use it the most. Sitting down with end-users to test products is vital for designing intuitive user experiences, but it can be intimidating if you have little or no experience doing it. In this blog post, I want to share some simple tools and techniques to get you started or to improve your approach to usability testing. We’ll walk through a quick example of how we designed and executed a usability study for the Centers for Medicare & Medicaid Services’ (CMS) Accountable Care Organization Management System (ACO-MS).

The Softrams team is a partner in building and maintaining CMS’s Accountable Care Organization Management System. Accountable Care Organizations (ACOs) are groups of doctors, hospitals, and other health care providers. The ACO program seeks to provide high-quality care to patients, reduce duplicative medical treatments, and pass the savings gained through efficiency on to its members. The Softrams team was asked to redesign the ACO-MS knowledge library and help desk pages. The legacy designs had usability challenges and did not scale as more content was introduced to the platform. To understand the challenges with the current platform, we had several in-depth conversations with the ACO-MS sponsors at CMS. These folks are seasoned civil servants who know the ACO program inside and out.

This leads us to our first tip: understand the difference between your end-users and your project sponsors. In some instances, end-users and sponsors can be one and the same. For the sake of this blog post, end-users are the people who use the product day in and day out. Sponsors are the supporting cast that makes a product or service come to fruition: they pay the bills, write the code, and support the product and its users. Before beginning any project, it is important to clearly understand all the stakeholders involved and what motivates each of these groups or individuals. For this redesign, our CMS sponsors were interested in creating a better help desk and knowledge library product for their ACO coordinators so that there was less demand on CMS government personnel; being constantly asked questions by coordinators takes CMS personnel away from other important work. Our end-users were motivated by the desire for a better experience: finding resources faster and reducing the number of times they needed to reach out to their coordinator.

Once we understood the motivations of our end-users and sponsors, the design team generated personas to capture the profiles of our end-users. A persona can be a simple card that provides basic details about your end-user: their name, age, actions, motivations, and pain points. Personas help the entire team understand, and stay focused on, who we’re building a product or service for. This is very helpful when prioritizing and evaluating our work. We can frame the conversation with the design, development, and project management teams around the user: “Would Johnny Persona’s experience be made worse if we eliminate this feature?” Communicating with our project teams in this manner helps to reduce internal biases and keeps the product focus on the end user.
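To make this concrete, here is a minimal sketch of a persona captured as a simple data structure, written in Python for illustration. The persona details below are purely hypothetical and are not from the actual ACO-MS study.

    from dataclasses import dataclass, field

    @dataclass
    class Persona:
        """A lightweight persona card: just enough detail to keep the team anchored on the user."""
        name: str
        age: int
        role: str
        motivations: list[str] = field(default_factory=list)
        pain_points: list[str] = field(default_factory=list)

    # Hypothetical example persona; the details are illustrative only.
    johnny = Persona(
        name="Johnny Persona",
        age=42,
        role="ACO coordinator",
        motivations=["Find program resources quickly",
                     "Resolve issues without contacting CMS personnel"],
        pain_points=["Hard-to-search knowledge library",
                     "Slow help desk turnaround"],
    )

    # Frame team conversations around the persona, not around internal preferences.
    print(f"Would {johnny.name}'s experience be made worse if we eliminate this feature?")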

From there, the team began exploring a first round of prototypes for the Knowledge Library and Help Desk pages. We demoed the draft prototypes to our CMS sponsors, who encouraged the design team to begin testing with end-users. This brings us to our second tip: test as early as possible in the design process, and work with your sponsors to identify test participants who represent your end-users. Some teams wait until a design is almost complete before testing, which makes it more difficult to adjust and pivot. I am a proponent of testing as early as possible to get feedback from users and improve the product or service. This can even start with walking users through some hand-drawn sketches!

An early iteration of the redesigned ACO-MS Knowledge Library.

Once you have a prototype ready to test, it’s time to plan how you want to test with your end-users. I am a fan of balancing a test script with open conversation during a usability test. For example, plan to test the navigation area, but don’t completely prescribe the questions you will ask. Instead, bring the user’s attention to it and ask an open question about how they would complete an action. Continue to probe as necessary and follow the conversation where it goes to uncover a design’s strengths and flaws.

  • Closed Question: “Are these all of the items that you would expect to see in the navigation, or would you want a home button, back button, etc.?”
  • Open Question: “What are some actions that you can complete with navigation? Is this what you would expect?”

Our team has used a basic Microsoft Word document hosted in the cloud to conduct user testing interactively. While one person is leading the test, they place their cursor on the question or theme in the discussion guide. This cues the notetaker as to where they should capture notes from the conversation. It is super helpful to create a discussion guide before you begin testing.

Here's a section from our Usability Testing Guide.
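Since the full guide isn’t reproduced here, below is a minimal sketch of how a discussion guide can be structured as themes with open prompts, written in Python for illustration. The themes and prompts are hypothetical, not the actual ACO-MS guide.

    # A discussion guide as structured data: each theme maps to a set of open prompts.
    # Themes and prompts below are hypothetical, not the actual ACO-MS guide.
    discussion_guide = [
        {"theme": "Recording consent",
         "prompts": ["Confirm the participant is comfortable being recorded."]},
        {"theme": "Navigation",
         "prompts": ["What are some actions you can complete with the navigation?",
                     "Is this what you would expect?"]},
        {"theme": "Knowledge Library",
         "prompts": ["How would you go about finding a resource on this page?"]},
    ]

    # Print the guide so the moderator and notetaker share the same map of the session.
    for section in discussion_guide:
        print(section["theme"])
        for prompt in section["prompts"]:
            print(f"  - {prompt}")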

Once you have a general plan for what you want to test and how you plan to flow through it, it’s time to reach out to your users. For this redesign, we worked closely with our CMS sponsors, who provided us with names and contact information for end-users we could test with. It is best practice, especially on government projects, to work with your sponsors to identify end-users to test with. Once we received the names of test participants, we drafted a message that introduced ourselves, provided some context for the project, laid out expectations for testing, and offered a call to action to sign up for a testing timeslot.

Here's the message we sent out to users.

Clear communication is important here! You want to ensure that your users know exactly what to expect from testing. You also want to make it as easy as possible for them to sign up for a day and time that suits their schedule. We used a cloud-based Excel sheet to keep this process simple.
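As a rough illustration, here is a small Python sketch that generates the kind of timeslot sign-up sheet we are describing; the dates, session length, and column names are assumptions, not the actual sheet we used.

    from datetime import datetime, timedelta

    # Hypothetical schedule: four 30-minute sessions per day across two days.
    first_session = datetime(2023, 3, 20, 9, 0)
    print("time,participant")
    for day in range(2):
        for session in range(4):
            slot = first_session + timedelta(days=day, minutes=30 * session)
            # Participant column is left blank for users to claim a slot.
            print(f"{slot.strftime('%a %b %d %I:%M %p')},")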

Everything we’ve covered up to this point will get you 75% of the way toward a successful usability test! Now that you have a plan in place and test participants identified, you and your team should conduct an internal dry run of the usability test. You can either practice on each other or recruit a volunteer from your organization to run the test with. This is a great way to work out the kinks and ensure that you are testing everything you need to before testing with real end-users.

When test day arrives, talk with your team beforehand and decide who will lead the test and who will take notes. When it’s time to start testing, show up to your meeting several minutes before the scheduled start time. If your user joins early, engage them in casual conversation. Once everyone has joined the call, go around the room and have everyone introduce themselves. Reiterate the context of the testing and what you will be testing today. Encourage the participant to be relaxed and candid, emphasizing that you are testing the product itself, not the user. Mention that there are no wrong answers and no bad questions, and encourage them to think out loud as they use the product. It is also incredibly valuable to record these sessions, so ask your participant if they are okay with this and build a cue into your discussion guide so that you do not forget.

During each of our testing sessions, we started off with a quick set of demographic questions. These included the user’s name, company/organization, role, location, and years of experience in their role. We also asked them questions about the current system – what works well and where there are opportunities for improvement.  

From there, we began testing our new design. We sent participants a link to a clickable Figma prototype, had them open it, and asked them to share their screen. We started by drawing their attention to the elements we wanted to test and followed our discussion guide, which was structured around the several areas we wanted to focus on. We asked them to complete several tasks, including finding resources in the Knowledge Library and submitting a help desk ticket. As we observed them completing the tasks, we probed when things seemed difficult, asking questions such as “Was that how you expected it to work?” and “How can we improve this particular experience?” We always do our best to avoid leading questions such as “On that feature, would you like to have filtering options such as date, time, and resource type?” Instead, we ask, “How would you expect filtering to work on this feature?” Essentially, don’t lead or bias them toward an answer; leave it as open as possible for them to provide their unbiased input.

The final tip I would like to share is to debrief with your team immediately after each usability call. This can be a 10-minute recap where each team member shares their biggest takeaways and findings. You can also use this time to organize your notes and key information to make synthesis easier once user testing is complete. I have found that these quick touchpoints allow teammates to share information while it is still fresh. Delaying this is dangerous; you will be surprised how quickly insights and findings begin to blur as you conduct multiple sessions. Staying organized throughout the process will make it much easier to generate a readout. We’ll talk about usability testing readouts in a future blog post.

Ultimately, usability testing is a conversation with your end-user. During a session, stay relaxed and engaged to improve the product and experience for the person you are testing with. As mentioned earlier, the key to a successful usability test is preparation: know the motivations of your key stakeholders, determine the product or feature you want to test, identify end-users to test with, and design a discussion guide that helps you uncover the strengths and flaws of your product or service.

For any questions related to human centered design, research, and usability testing, please reach out to the Human Experience team at Softrams!

For more information on Accountable Care Organizations, please visit cms.gov.
