
COVID-19 Antigen Test

At-home nasal swab test kits for the novel coronavirus.


CLIENT

Becton Dickinson (BD)

TEAM

Scanwell Health (YC 18)

  • 2 product designers

  • 2 UX researchers

  • 1 Head of Design (direct manager)

This lists only the design team. Contact me for full team details!

CATEGORY

Product features:

  • iOS/Android for mobile

  • Physical nasal kit

  • Physical & digital instructions

  • B2B2C, B2C, 0 to 1

DURATION

1.5 years (Sept 2020-Jan 2022)

Launched in Oct 2021

THE CHALLENGE

Scanwell Health collaborated with Becton Dickinson (BD) to create the first digitally read at-home COVID-19 test. This was the first time BD and Scanwell had partnered on a product, and with that came its own set of challenges, such as figuring out the best way to communicate, how each team functioned, and how to bridge the teams' varying work styles. One major challenge we faced continuously was adapting and pivoting our product design to rapidly changing customer segments, user needs, and FDA requirements and expectations.

THE GOAL

Build a COVID-19 antigen test that employees can perform at home, that delivers instant results, and that makes those verifiable results easy to share with their organizations.

Our original target market was employees sharing results with their employers (B2B2C). This initial scope significantly shaped the eventual design of the product, especially later on when it became available to the general public.

TL;DR FOR THE BUSY SOULS

MY ROLE

I led the UX research team and was the main UXR contact and advocate with stakeholders. During the first third of the project, until more designers joined the team, I was also the main UX design contact.

BUSINESS CONTRIBUTION

Helped over a million users successfully perform the test in the span of 3 months (less than a 2% usability fail rate). Produced accurate data on the expected failure rate 4 months prior to launch, which prevented delays in the product development timeline.

BEST LESSON

For products that address national emergencies, it is advantageous to time the market because user perceptions, desires, and motivations shift significantly and rapidly (although hindsight is 2020).

MOST SURPRISING INSIGHT

Across all 11 usability studies, users who failed to perform the test correctly reported higher confidence in their performance than users who performed it correctly.


THE PROCESS

PARTNERSHIP W/BD

The various teams involved in this project worked concurrently with daily syncs. My team at Scanwell led the efforts on everything that involved the mobile app and the overall user experience of the product. The BD team was responsible for the marketing, manufacturing, and distribution strategy.

Aside from the clinical study which was run by BD, all other usability studies were run and led by me and our UXR team.

11

usability studies

300+

users interviewed

20,000

data points

8

competitive analysis reports


UXR PROCESS

STUDY TIMELINE

STUDY INTRODUCTION

Product managers, business development, and/or BD would typically request answers about certain features and flows. Some studies were initiated by the design team.

RESEARCH PLAN + INTERVIEW SCRIPT

The UXR team would break down all aspects of the study in a Google Doc and make adjustments based on feedback.

PROTOTYPE + RECRUIT USERS + SHIP KITS

Logistics, logistics, logistics.

INTERVIEW + REPORT

Mid-way reports were common for faster data synthesis.

MY ROLE

EVOLVING FROM SOLO DESIGNER TO UXR LEAD

User research and usability testing were my primary focuses on this project; however, during the first third of the project, I had a more general UX role: in charge of all things interaction design and instruction design. As we brought more members onto the design team, I was able to home in on our research processes and dive deeper into the major potential risks in our product.


UX Designer

Protocol creator

Recruiter

Interview Moderator

Notetaker

Logistics Lead (i.e. BFFL with FedEx employees)

Presenter

User Advocate

Deck maker

Mentor

INITIAL CHALLENGE DURING THIS PROJECT:

HOW MIGHT WE ALLOW EMPLOYEES TO SHARE ACCURATE AT-HOME COVID-19 ANTIGEN RESULTS WITH THEIR ORGANIZATION?

THE CHALLENGES

THE INITIAL PROBLEMS

Our goal was to create a test kit and digital app that would allow users to perform an antigen test in their homes and receive results the same day. As it was the first digitally read, at-home COVID-19 Antigen test, many challenges existed.

CARETAKER USER SEGMENT

The original target market widened to a more general population to accommodate shifting user needs during the pandemic, bringing the additional challenge of designing for scenarios where adults may need to perform the test for children and older users.

REGULATORY RULES

Replicating real-life testing environments with remote users was difficult due to IRB rules regarding the collection of bodily samples during studies. Changes made post-FDA authorization also had to be thoroughly analyzed to determine whether they deviated too drastically from the original design.

TESTING PHYSICAL & DIGITAL

As this product required a user to perform the physical test with digital components (scanning and instructions), we had to figure out the best remote strategy for testing all components concurrently during the peak of the pandemic.

OH WAIT,
THERE'S MORE

Our initial goals changed and expanded as the FDA refined and altered its requirements for this test alongside growing data on, and understanding of, this new virus.

B2B2C + B2C

We anticipated that organizations would purchase these tests in bulk so their employees could test before returning to work; however, the user segment expanded when it was decided that the test would be available for anyone to purchase through Amazon. This change in distribution strategy made device compatibility a major pain point.

WEEKLY REPEAT TESTING

We did not foresee how long the pandemic would last in the US, and we assumed the test would be for one-time use. As the pandemic stretched from its first year into a second, the length and design of the flow had to be significantly updated to accommodate repeat users, as repeat testing became a common and essential value proposition for US users.

2 TESTS PER KIT

The packaging and visual assets were designed for one test per box. Post-submission, we learned that the FDA now wanted serial testing in every COVID-19 test. This left us 2 months to figure out the most seamless, least complex method of including 2 tests in each kit without significantly altering the packaging design, visual assets, and user flow.

USER JOURNEY

GUARANTEED FEATURES

All of the diagnostic tests at Scanwell share what I like to term "universal features" in the user experience. This was also the case for the antigen test kit.


01

PHYSICAL & DIGITAL

User receives test kit and downloads our app

02

TEST IDENTIFICATION

App identifies test type

03

SELF-TESTING

User performs test while following app instructions

04

SCANNING

User scans the test strip/cassette with the scan card

05

RESULTS & NEXT STEPS

User receives test results
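
Taken together, the five universal steps form a strictly linear flow. As a toy illustration only (the step names mirror the journey above; the enforcement logic is hypothetical, not the app's actual code), the ordering could be modeled like this:

```python
# A minimal sketch of the five-step journey as an ordered, linear flow.
# Step names mirror the journey above; the logic is a hypothetical
# illustration, not the app's actual implementation.
from enum import Enum, auto
from typing import Optional

class Step(Enum):
    DOWNLOAD_APP = auto()    # 01: receive kit, download the app
    IDENTIFY_TEST = auto()   # 02: app identifies the test type
    SELF_TEST = auto()       # 03: perform the test with app instructions
    SCAN = auto()            # 04: scan the strip/cassette with the scan card
    RESULTS = auto()         # 05: receive results and next steps

ORDER = list(Step)

def next_step(current: Step) -> Optional[Step]:
    """Advance linearly through the journey; None once results are delivered."""
    i = ORDER.index(current)
    return ORDER[i + 1] if i + 1 < len(ORDER) else None

step: Optional[Step] = Step.DOWNLOAD_APP
while step is not None:
    print(step.name)
    step = next_step(step)
```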

THE APPROACH

EXPERIMENT. ASSESS. GO.

We faced many unpredictable obstacles that forced us to alter our approach. To handle the unexpected, our main strategy was to constantly and rapidly experiment with various programs, tools, and processes that could increase the efficiency with which we tested concepts, collected data, and shared data.

CONSTRAINT → APPROACH*

FREQUENT, TIME-CONSUMING UPDATES TO STAKEHOLDERS → CONSOLIDATED WIREFRAMES ON FIGMA AND GOOGLE SLIDES

TECHNICAL LIMITATIONS FOR STUDY PROTOTYPES → EXPERIMENTED WITH ORIGAMI, TESTFLIGHT, AND PROTOPIE

NOT ENOUGH MANPOWER FOR DESIGN AND RESEARCH → HIRED 2 FTEs FOR THE DESIGN AND RESEARCH TEAM

*Not all of our approaches were perfect solutions. To hear more details, contact me!

UNDERSTANDING THE USER'S MENTAL MODEL

Another essential point we had to constantly question was the design of the studies themselves. We had to put ourselves in the shoes of our users.

How do we capture enough data about a user's home environment to understand their scanning struggles before it becomes too invasive?

Would hosting a remote study through Zoom deter a user from naturally moving away from the camera to get a better scan?


In what mindset would users actually be taking the test? Would they think they were showing signs of COVID-19? Had they been in contact with someone positive? How long ago?

THE FRAMEWORK

TRIANGLE DEFENCE

We had three shields to our strategy:

HYBRID USABILITY STUDIES

We ran usability studies that combined both quantitative (statistical data on task analysis) and qualitative (the why and how data) aspects of user research. It was a true mix between summative and formative usability methods.

WEEKLY TESTING PROGRAM

Post-launch, relying on live market data for improvements slowed the velocity and limited the depth of our insights. For quicker data retrieval, I launched the Weekly Testing Program (WTP), where we ran highly focused, small usability interviews on a weekly basis (e.g. with repeat users or on new concepts).


COMPETITIVE ANALYSIS

The market was shifting rapidly, and the number of high-priority tasks increased exponentially. To ensure that we still had a thorough understanding of the landscape, I launched and led weekly 2-hour deep dives into competitors' test kits with design team members.

01
HYBRID USABILITY STUDIES

This mixed-methods approach was our main line of defence and go-to tactic. Remote interviews were structured so that the first half consisted of observing users performing the test and noting which tasks they failed, while the second half consisted of deep-dive questions into the why behind users' actions.

Combining the two kinds of data allowed us to capture not only success rates and estimates of how many users would behave a certain way, but also the likely reasons behind users' reactions.

Studies ranged from 15 to 30 users and 30 to 60 minutes per interview, and each took 2 to 3 weeks end to end, from creating the research plan to sharing the results with stakeholders.
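
To make the mixed-methods roll-up concrete, here is a minimal sketch of how the summative tallies from the observation half and the formative "why" notes from the deep-dive half of a session could be combined. The data shapes, task names, and notes are hypothetical placeholders, not our actual tooling or results.

```python
# Roll up per-task success rates alongside the qualitative reasons
# captured for each failure, so no number ships without a "why".
from collections import defaultdict

sessions = [
    {"user": "P01",
     "tasks": {"swab": True, "process_wait": True, "scan": False},
     "why": {"scan": "lost track of the 5-minute scan window"}},
    {"user": "P02",
     "tasks": {"swab": True, "process_wait": False, "scan": False},
     "why": {"process_wait": "assumed processing finished early",
             "scan": "room lighting was too dim"}},
]

passed, attempted = defaultdict(int), defaultdict(int)
reasons = defaultdict(list)

for s in sessions:
    for task, ok in s["tasks"].items():
        attempted[task] += 1
        passed[task] += int(ok)
    for task, note in s["why"].items():
        reasons[task].append(f"{s['user']}: {note}")

for task in attempted:
    rate = passed[task] / attempted[task]
    print(f"{task}: {passed[task]}/{attempted[task]} ({rate:.0%}) succeeded")
    for note in reasons[task]:
        print(f"  why: {note}")
```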


02
WEEKLY TESTING PROGRAM (WTP)

Relying solely on Defence Shield 1 slowed down our ability to validate new ideas and concepts as the list of priority items expanded. To counter this, we launched weekly "mini" studies in Q4 2021 that allowed us to quickly gauge which ideas and new wireframes would be potentially high-risk changes.

I proposed that we stagger our studies so that we could obtain more value from each user since we were spending a significant amount of time and money to recruit participants, ship test kits, and run these interviews. One method was to have first-time users intentionally save the extra kit they obtained for a follow-up study in a couple of weeks. This allowed us to gain even more insights into not just first-time users but also serial testers (2 user segments in one!).

03
COMPETITIVE ANALYSIS

Usability studies alone weren't enough to create a strong defensive strategy. Every other week we were hearing about a new competitor, and we were struggling to keep up with the industry's intense pace of product development.

To prevent the team from getting overwhelmed, I set up 2-hour weekly competitive analysis sessions with design and research team members to run deep dives on products that had already been on our product team's radar.

We analyzed end-to-end user journeys, marketing strategies, product pain points, manufacturing quality, instructions design, and onboarding/offboarding experiences to formalize insights on what we should and shouldn't do.

Reports were then shared with the rest of the team through Slack, Threads, and ClickUp.


HIGH LEVEL INSIGHTS

FOR MARKET LAUNCH, FACTOR IN CHANGES IN USER PERCEPTION AND MOTIVATIONS

SCANNING DATA NOTORIOUSLY DIFFICULT TO CAPTURE REMOTELY

VIDEO QUALITY AND FIDELITY MORE IMPACTFUL THAN WARNINGS AND COPY

USERS WHO FAILED WERE MORE CONFIDENT IN THEIR PERFORMANCE THAN THOSE WHO PASSED

DISCOVERY PHASE SIMPLIFIED

INSIGHTS CATEGORIZED 1, 2, 3!

There were three major data buckets that we were primarily focused on throughout this project.

01
TASK PERFORMANCE

Our first and primary objective was ensuring that our instructions were clear enough that users would correctly perform the actions needed to complete the test. Task analysis through observational studies was the best plan of attack for capturing this type of data. Nearly all of our usability studies included task analysis.


One of the earlier task analysis flows I created

02
SCANNING PERFORMANCE

Equally important as performing the test correctly was ensuring that users could scan the test stick correctly so that we could analyze their results. We ran numerous internal studies at small and large scales to ensure that we were capturing enough data on the major ways a user could fail to scan. In Q4 2021, convinced we needed in-person scanning studies for more accurate data, I created a complete research plan and proposal. For more information, contact me!


Internal scanning study with BD employees

03
RESULTS COMPREHENSION

One of our main value propositions revolved around being able to send reliable test results to organizations. At the same time, we had to ensure that our results content was aligned with FDA regulations. We juggled and debated how to share dense information, ensure users understood their next steps, and create a clear navigational system for sharing results.


Earlier iterations of the results screens


ONE THING I WOULD DO DIFFERENTLY

I regret not pushing harder for more clarity on which user segment we were targeting at the initial market launch. Related to this, I regret not pushing for more onboarding data prior to our launch.

Both aspects killed our App Store and Amazon reviews because we had not accounted for device compatibility issues with the general public. Since we had assumed we would ultimately be designing for employees, we had adapted our camera scanning capabilities to a specific set of phones, strategically testing our algorithm on the latest and most common smartphones since it would have been impossible to analyze every smartphone in existence.


"I really like this app. [These are] better results to send to an employer because otherwise you just send a picture of the test stick to your boss."

WTP user, Week 4

MAJOR INSIGHTS

HIGH-RISK PAIN POINTS ANALYZED

There were two major areas that needed long-term testing and numerous iterations to get right. By the end of Jan 2022, there was still room for improvement in both features.

PP1: LET ME SCAN IN PEACE!

The initial major pain points around scanning included:

LONG PROCESSING TIME (15 MIN) + SHORT SCANNING WINDOW (5 MIN)

Users have only 5 minutes to successfully scan their test cassette and scan card once the test has finished its 15-minute processing period. This increases the chance that a user forgets the correct way to scan and troubleshoot.

DIFFERENT + MORE COMPLEX COMPARED TO COMMON 2D SCANNING PRODUCTS 

Unlike most scanning products (e.g. mobile bank check deposits or PDF scanning apps), our scanning experience required users to scan a 3D object that was sensitive to colors and had a very specific time limit. 

CHANGES IN NATURAL LIGHTING + LOCATION OF TEST KIT ITEMS

Since the test takes roughly 20 minutes to complete before reaching the final 5-minute scanning step, and users are interacting with many items along the way, there was a high chance that something had moved out of place since they started the test and/or that the lighting conditions had worsened while the test was processing.
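
To make the timing constraint above concrete, here is a tiny sketch of the window logic. The function, names, and thresholds are a hypothetical simplification for illustration, not the app's actual implementation.

```python
# Hypothetical simplification of the timing constraint described above:
# a 15-minute processing period followed by a 5-minute scanning window.
PROCESSING_MIN = 15    # cassette must develop before it can be read
SCAN_WINDOW_MIN = 5    # the only window in which a scan is valid

def scan_state(elapsed_min: float) -> str:
    """Classify where a user is relative to the valid scan window,
    measured from the start of the processing period."""
    if elapsed_min < PROCESSING_MIN:
        return "processing"        # too early: result not developed yet
    if elapsed_min < PROCESSING_MIN + SCAN_WINDOW_MIN:
        return "scan window open"  # user must scan within these 5 minutes
    return "expired"               # too late: result may no longer be reliable

for t in (10, 16, 21):
    print(f"{t} min after processing started: {scan_state(t)}")
```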


Older wireframe iterations of the HLC instructions


Early storyboards of a "How to scan" video I created


Quick sketch mockups I created during a brainstorm on what the best instructional diagram could look like

THE BIRTH OF THE HOME LIGHT CHECK

Instructing users on how to scan effectively right before the 5-minute final capture scan wasn't producing the results we needed. It was too last-minute: too much brand-new information for a user to absorb and retain in an already time-sensitive, stressful environment.

That was how the Home Light Check (HLC) was born. Prior to starting the actual test, a user would read static images and copy that explained how to scan correctly.

They would then attempt to scan just the scan card in the Home Light Check, which closely emulated what they would experience during the Final Capture step.

Our hope was that this practice scan would be close enough to the final one that users would have figured out all the issues and variables that could positively or negatively impact their scanning experience.


PP2: IT TAKES TOO LONG TO TAKE THE TEST MULTIPLE TIMES!

Serial testing was not on our radar when we initially submitted our application to the FDA for EUA (Emergency Use Authorization); however, many of our post-launch studies showed that this would be a critical feature.

Repeat users were becoming impatient with the mandatory videos in each step of the test. We had originally decided to make all videos mandatory so that we were absolutely confident that users were at least hearing or seeing the required instructions that were set by the FDA.

As the pandemic transitioned from a temporary blip in our timeline to a potential endemic, we began to shift our focus to the needs of repeat users.


SUPER COOL UPDATE TO THIS EXPERIENCE BUT CANNOT YET DISCLOSE :(

I cannot disclose the details on how we changed this feature as it has not yet launched as of Jan 2022.

For more information, contact me!


"On my first day at Scanwell, I witnessed Michelle running 30 back-to-back interviews to ensure we had enough Android users represented in our study.
All in 2 days."

Katherine Hsiao

UX Researcher at Scanwell Health

THE RESULTS

Conclusion by the #s

As with all early-stage start-ups, we didn't get it perfect the first time around, but we made significant improvements over the year and a half we spent building this product. I'm especially proud of our high usability percentages.

21.4%

improvement in scan success rate since the introduction of the Home Light Check

12.2%

usability rating improvement from 7.7 to 8.92 between the first and last usability studies

42%

51.6% of users successfully performed the test in the first usability study; 94% succeeded by the last
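
For anyone checking the arithmetic behind the headline figures: the 12.2% reads as the rating gain against a 10-point scale (an assumption the numbers imply, not stated in the study reports), and the 42% as the absolute percentage-point gain in success rate:

\[
\frac{8.92 - 7.7}{10} = \frac{1.22}{10} = 12.2\%,
\qquad
94\% - 51.6\% = 42.4 \approx 42 \text{ percentage points}
\]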

After a year of working together on this project, BD acquired Scanwell Health at the start of 2022.

Lessons Learned

NEITHER HEALTHY NOR SUSTAINABLE TO RUN 30 INTERVIEWS IN 2 DAYS

NEED VARIOUS PROTOTYPING STRATEGIES FOR MATURE PRODUCTS W/OVER 1 MILLION USERS

I TEND TO STICK TO STATUS QUO WHEN I'M BURNED OUT INSTEAD OF TAKING THE INITIATIVE

MARKETING PLAYS A HUGE ROLE IN B2C PRODUCTS
