Virtual Goggles

Mixed-Methods VR Usability Study with Surprising Impact

A research project to evaluate the usability of three product prototypes in a novel technology space. My contributions resolved a paradox between preference ratings and safety concerns.

Goal: Determine the most usable of three product prototypes.

Project Scope


  • Client: Confidential

  • Timeframe: 7 weeks

  • My Role: Research Assistant

  • Team: Lead Researcher, Research Assistant (me), Research Assistant, Lead PM, Lead Engineer, Director of Product

  • Methods: Surveys, usability testing, observation of divided-attention tasks, interviews

  • Tools: Excel, Microsoft Word, Adobe Pro, PowerPoint, SPSS

High-Level Timeframe

[Image: high-level VR project timeline]

The Problem

The engineering and PM teams had a product vision but were split between three prototype versions: low, medium, and high quality. Each prototype had tradeoffs in visual quality and processing power.

The teams wanted to determine which proposed design was most preferred and ranked highest on safety.

Assumptions and Hypotheses

  1. Users would most prefer the highest-quality prototype and make the fewest usability and divided-attention errors with it.

  2. Users would give lower preference ratings and make more usability and divided-attention errors with the medium-quality prototype than with the high-quality prototype.

  3. Users would least prefer the lowest-quality prototype and make the most errors with it.

My Role

I was the UX Research Assistant in charge of:

  • Improving the research plan

  • Creating novel usability tasks

  • Creating novel divided attention tasks

  • Moderating every study session (36 total)

  • Setting up lab cameras and organizing footage

  • Analyzing and visualizing behavioral data

  • Synthesizing behavioral data with preference data

As a VR product, safety is of the utmost importance. Because this product existed in unprecedented territory, I was given the vital task of creating representative usability and divided-attention tasks to evaluate the safety of the prototypes.

Since the prototypes had limited computational power and resources, I had to be creative and design brand-new tasks in an unexplored domain.

I ended up creating three usability tasks and two divided-attention tasks, together assessing user safety for each prototype.

It was also important that we block out light from a nose gap in the headset so that real-world information would not contaminate the data. The research lead initially tried a scarf and a handkerchief, which were uncomfortable and ineffective. I had the idea to use two tissues folded lengthwise, which was comfortable, cheap, and effective! This increased the validity of the data collected across the entire experiment.

Methodology

The research team determined that a mixed-methods design combining quantitative survey responses, open-ended interview responses, and usability testing would provide the most comprehensive data, pairing attitudinal measures with behavioral ones.

Recruitment: Due to the sensitive nature of the product, we recruited 36 participants internally at the company, with varying levels of VR familiarity, to ensure a range of user perspectives was represented.

In-Lab: The prototypes were large, bulky, and cumbersome, requiring a lab environment to standardize the findings. Each participant completed the usability tasks, divided-attention tasks, survey questions, and interview with a single prototype, then viewed the other two prototypes at the end to provide initial preference ratings.
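Since each participant worked with a single prototype, this was a between-subjects design. The study's actual assignment scheme isn't documented here; as a minimal sketch, a round-robin rotation is one simple way to split 36 participants evenly across three prototypes:

```python
# Hypothetical between-subjects assignment sketch. The prototype names
# and participant IDs are illustrative, not from the study itself.
PROTOTYPES = ["low", "medium", "high"]

def assign(participant_ids):
    """Cycle participants through the prototypes in order."""
    return {pid: PROTOTYPES[i % len(PROTOTYPES)]
            for i, pid in enumerate(participant_ids)}

groups = assign(range(1, 37))
counts = {p: sum(1 for v in groups.values() if v == p) for p in PROTOTYPES}
print(counts)  # each prototype ends up with 12 participants
```

An even split like this keeps the per-prototype comparisons balanced, though a real study might also counterbalance by participant experience level.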

[Image: VR headset]

Breaking Down the Process

I used Adobe Premiere and Excel to code the usability and divided-attention tasks, and created highlight reels of task successes and failures to ground the findings in empathy.
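Coding video sessions this way boils down to tallying errors per prototype and task type. As a rough illustration (the data, column layout, and values below are hypothetical; the real analysis lived in Excel and SPSS), a sketch of that summarization step:

```python
from collections import defaultdict

# Hypothetical coded outcomes: (prototype, task_type, error_count).
# In the real study these codes came from video review; the values
# below are illustrative only.
sessions = [
    ("high",   "usability",         0),
    ("high",   "divided_attention", 1),
    ("low",    "usability",         3),
    ("low",    "divided_attention", 2),
    ("medium", "usability",         2),
    ("medium", "divided_attention", 2),
]

def mean_errors(rows):
    """Average error count per (prototype, task_type) cell."""
    cells = defaultdict(list)
    for prototype, task, errors in rows:
        cells[(prototype, task)].append(errors)
    return {cell: sum(v) / len(v) for cell, v in cells.items()}

for (prototype, task), avg in sorted(mean_errors(sessions).items()):
    print(f"{prototype:>6} | {task:<17} | {avg:.1f} avg errors")
```

A per-cell summary like this is what lets behavioral error rates be lined up against the attitudinal preference scores later.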

Major Learnings

Users preferred the lowest-quality prototype but made more usability-task errors with it than with the highest-quality one.

Users liked the simplicity of its visuals even though their behavior showed more unsafe actions.

This indicates that the safer prototype design should be visually simplified to increase trust while maintaining safety.

The least liked and least usable prototype (medium quality) showed no performance difference between usability and divided-attention tasks.

This prototype was intended as a compromise between visual quality and computing power.

With it, users' attention was already constrained to the point of low safety, indicating a non-viable product.

Users with less VR experience were less impressed with the prototypes.

Users who were new to VR gave lower preference scores than more experienced users for every prototype, suggesting a misalignment between new users' expectations and the prototypes' capabilities.

Further research is needed to understand new users' expectations for novel VR products.
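A comparison like this comes from grouping preference scores by prior experience and comparing the group means. A minimal sketch, assuming hypothetical 1–7 ratings and group names (the actual scores came from the study's surveys):

```python
from statistics import mean

# Hypothetical preference ratings grouped by prior VR experience;
# the group names and values are illustrative, not study data.
ratings = {
    "new_to_vr":   [3, 4, 3, 5, 4],
    "experienced": [5, 6, 5, 6, 7],
}

def group_means(scores_by_group):
    """Mean preference score per experience group."""
    return {group: mean(scores) for group, scores in scores_by_group.items()}

for group, avg in group_means(ratings).items():
    print(f"{group}: {avg:.2f}")
```

In practice a significance test (e.g., in SPSS) would follow the descriptive comparison before drawing conclusions.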


Impact

The team came away with a roadmap of what users preferred and which design choices reduced safety issues, allowing them to iterate on the product design toward both goals.

The research lead is in the process of submitting the study for publication to share the findings with the VR community.


Lessons Learned

For unexplored product areas, multidimensional data provide more nuanced results.

By combining user preferences with usability tasks and divided-attention tasks, we were able to tease out the underlying reasons why people preferred certain designs over others. Had user behavior not been accounted for, the team would have moved forward with a less safe prototype that people preferred, potentially creating legal exposure at launch.

Small improvements make a big difference.

My solution of two folded tissues to block out real-world context cues increased the validity of every finding in the study. Though small in scale, this contribution meant the team could trust that differences between prototypes were not influenced by the outside world.

Like what you see?

Let's chat.


© 2024 by Kory Feath.
