RISE:
Hardware usability study

Brief

The client, a manufacturer of household appliances, employed RISE to conduct usability studies of two new features in a dishwasher, in order to identify issues with their affordance and usability.

Client: RISE client project
Role: UX Researcher
Deliverables: Test scenarios, user test documentation, summary report

Note: The work is under NDA so I’ll keep details and images vague.

Core challenge

Writing test scenarios that isolate particular machine features, and grading them into distinct pass/fail states, is tricky – as is using unambiguous language that all stakeholders interpret the same way, especially in an international team spread across the world.

Methodology & my contribution

Five test subjects were invited to the RISE UX Lab in Stockholm to perform scenarios under remote observation by UX researchers and client stakeholders. The subsequent tagging and thematic analysis were compiled into insights for the client to incorporate into their design work.

I assisted the senior researchers with writing the test scenarios and defining the pass/fail levels, took notes during the user tests, joined the post-test analysis with the client, did the thematic analysis and tagging in Dovetail, and proofread the finished client report.

Outcome

Our tests indicated a low level of affordance for both of the tested features, and the report offered actionable insights to help the design team refine their design process.

Preparation

The client provided a brief outlining the two functions they wanted tested for usability. It was our job to translate the somewhat ambiguous wording into a test protocol that would cover both features in about 30 minutes, with multiple pass/fail states for each test.

The client delivered the dishwasher to the UX Lab in Stockholm and the local team set it up along with all utensils and dishes that were to be used.

Once the test protocol was written we ran a pre-test with one subject, and based on the results and feedback from the client we revised the protocol – “leave tray A in the upright position at the start as a cue”, “before failing the user completely, add a second demonstration trigger”, and so on.

Test day

On the day, five subjects went through the test scenario individually, guided by a senior researcher onsite. Another researcher and I took notes remotely, and a couple of client stakeholders observed as well. Each test took about 40 minutes and was followed by a half-hour debrief with the client where we compared notes.

The client has run these tests for many years and knows that the best feedback comes from watching people get frustrated and fail – a neutral test protocol and a detached test administrator are a great way of getting those results.

Even when subjects failed a task completely, they often gave the features full marks in the exit survey – “oh yeah, now that I know the point of it I really like the idea” – so if you want to improve a product, it’s better to watch what people do than listen to what they say.

People don’t want to be perceived as impolite or stupid, so they tend to view their failure to understand something as a personal oversight, not the fault of the designer. Hence the term “user error” appears when we try to explain mishaps and accidents – instead of putting the blame squarely on the doorstep of those who didn’t vet their products well enough.

Summary & report

When doing observational studies it’s important to be aware of one’s biases and the tendency to infer intent instead of going only by observed facts – after all, we want to know what happened, not guess at what users were thinking – so rigorous coding and collation of the results is essential.

Once the tests were done we had observational notes from three researchers as well as comments from client stakeholders, and it was time to group, tag, and summarise the material. Using Dovetail for the clustering and thematic analysis, we compiled the results into a report with actionable insights and delivered it to the client.

Testing takes time and costs money, but the cost of shipping even seemingly simple features that turn out to be faulty or unusable far outweighs any upfront testing expense. It was fascinating to be part of the process with such an experienced team, both at RISE and on the client side.

I got a nice evaluation for my work!

I prepared a usability test together with Mateusz during his time at RISE. The idea was to show him how I conduct usability tests in the UX lab, but it ended up with Mateusz being able to assist me and help me with all the tasks excellently. Mateusz developed appropriate tasks for the product test, wrote scripts and triggers, observed and took notes during the test sessions. Then, he actively participated with me and the design team in the analysis work and wrote a concluding report. Mateusz performed all of this in an exemplary manner, demonstrating his knowledge of the theory and asking the right questions to deliver in a high-pressure situation for a major client. I highly recommend Mateusz to anyone considering him for their future team!

Johan Rindborg, Senior usability tester at RISE – original in Swedish on my LinkedIn profile