Aperture Health

As the first designer hired at Aperture, I introduced user research, prototyping, and lean experiments to the credentialing team and its stakeholders. The hands-on, human-centered work I did solved process inefficiencies with small solutions rather than big deliverables. We created $1M+ in annual savings in our first ten months.

Healthcare Provider Credentialing Systems

Different success and error notifications the NPDB service returns.

I led user research and synthesis on the Synergy product. I designed prototypes, lean experiments, and stories. I worked with the product managers to define business metrics and gather data.

Our first release improved the quality of imported office address data. Synergy users went from manually keying offices in 82% of files to just 12%, while overall file quality improved from 89% to 93%.

Our second release eliminated a mandatory 45-minute wait and initially increased file-per-hour from 2.00 to 2.44, an estimated annual savings of $850k.

Figma (Requires Account)

Background

Aperture Health is a credentials verification organization (CVO). Insurance payers (e.g., Kaiser) and Medicaid programs contract healthcare providers (doctors, nurses) to their health plans. Aperture then credentials those providers by confirming application information and verifying data from third-party sources.

Verification Coordinators (VCs) are solely responsible for an application from start to finish, beginning with a Kick Off of the file.

An application goes from New VC to App Complete to Close Out in order to get to the client. There are several pitfalls and forced delays along the way.
35% of applications require outreach during Kick Off, but fully 98% of files require the workflow to run NPDB.

The National Practitioner Data Bank (NPDB) holds malpractice records on providers nationwide. An individual's NPDB record comes back clean or with hits.

To run NPDB, VCs must mark the application "App Complete" and wait a variable amount of time for the system to return the NPDB. Rather than wait, they kick off as many files as possible and come back the next day to Close Out files with NPDB results. This means re-checking the entire file, finishing any work left over from Kick Off, and finally submitting to the client.
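To make the forced delay concrete, here is a small, purely illustrative model of the file statuses described above. This is my sketch, not Synergy's actual implementation:

```python
# Illustrative model of the credentialing file flow described above.
# Hypothetical code, not Synergy's implementation.
from enum import Enum, auto

class FileStatus(Enum):
    NEW_VC = auto()         # fresh application, assigned to a VC
    APP_COMPLETE = auto()   # Kick Off done; NPDB request submitted
    NPDB_RETURNED = auto()  # NPDB result back after a variable wait
    CLOSED_OUT = auto()     # file re-checked and sent to the client

# The variable wait between APP_COMPLETE and NPDB_RETURNED is why VCs
# batch Kick Offs and return the next day to Close Out.
NEXT = {
    FileStatus.NEW_VC: FileStatus.APP_COMPLETE,
    FileStatus.APP_COMPLETE: FileStatus.NPDB_RETURNED,
    FileStatus.NPDB_RETURNED: FileStatus.CLOSED_OUT,
}
```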

Service Design Discovery

The credentialing process just outlined wasn't known to us. On day 1, our stakeholders wanted to increase file-per-hour (FPH, or overall total output) and credentialing quality (fewer mistakes = happier customers).

For the past 15 years, Aperture's workforce has used Advantage to credential providers. In the past year, VCs have been using Synergy, a front-end replacement for Advantage. However, specialists did not have their own version of Synergy. Our stakeholders' first idea: get specialists onto Synergy and reap the FPH benefits.

Advantage: a Windows 2000 application accessed over a virtual desktop.
Advantage requires remote login, specialist knowledge, and careful window management.
Synergy: browser-based VC work, built on Bootstrap.
A 2.5 year effort created Synergy, a browser-accessible UI that sits on top of Advantage. In its first year of use, FPH increased from 1.8 to 2.0.

With that in mind, we interviewed the gamut of credentialing users, over 20 Aperture employees in a week. We asked open-ended questions about their process, tools, and pain points.

A calendar screenshot showing 20+ interviews in our first week.
Our first week. An ideal Discovery involves direct user interviews to "drink from the firehose".

We used first-hand accounts to build out a credentialing service blueprint. This helped us understand our team's knowledge gaps where we needed to re-interview. It also ensured we identified "vertical" pain points, issues that affect the whole system rather than a single user type.

Now we look for inefficiencies and pains across this whole workflow.
The service blueprint tracks a file across our users, the front-end + back-end systems, and even physical touchpoints like the mailroom. Highlighted in yellow, the vertical pain points we focused on in Framing.

I led the team through "going wide" and "narrowing" in a problem space. The team picked two starting points: improving data imports into Synergy and eliminating NPDB wait time.

Problem generation stickies.
I facilitated problem generation as a team across the service blueprint, personas, and types of files. PMs and engineers also attended interviews so the whole team could participate in the exercise.
New VC process.
The blue boxes represent our first two tracks of work. In a future state, VCs are given better application data before they touch the file and can run NPDB during Kick Off.

NPDB: From Paper Prototypes to Lean Experiments

To finish a file during Kick Off, users have to request and receive the NPDB before they finish their first pass. This means they have to understand, like, and use our new NPDB feature page.

Paper 'how might we' prototypes from the team
I ran a design studio to understand how our team pictured the NPDB solution for VCs, then translated the results into a Figma prototype.
Screenshot of our first prototype.
One iteration: show "Run NPDB" along with the "query NPDB" data (left) and app (right).
Sticky notes from the user interviews.
We synthesized direct quotes and observations from 5 users with the above prototype. Users could explain their excitement, but only after they understood the flow.

When VCs saw the new NPDB page in the prototype, they were confused, unwilling to click new buttons, and apprehensive about moving NPDB "up" in the process. VCs expected the NPDB to return much later or possibly kick them out of the file. They were overwhelmed by the queries and application content below the banner. However, they were excited and surprised to get back the NPDB immediately and could explain the value of immediate return. This moment of delight was strong enough that it was worth iterating on the concept to simplify the layout and better set expectations.

A new version of the NPDB query with confusing elements removed.
New, cleaner version of the NPDB query.

Over the course of four iterations with 4-6 VCs, we validated user interest and increased comprehension. I then proposed we do a lean experiment with real files. It was important to quantify the business value of new NPDB and understand if interview attitude (I like this) translated to real-life behavior (I will submit the file in one pass).

Watching users go through real files.
Watching a user go through a provider file. In our lean experiment, we pre-worked files and pre-ran NPDBs for applications. While VCs worked those applications, we sent them a Slack message as "NPDB Bot" alerting them that an NPDB had been run. Our hypothesis: VCs would be willing to Close Out a file then and there, which would be faster than their current process.
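For the curious, a minimal sketch of the kind of alert "NPDB Bot" sent, assuming Slack's Web API via the slack_sdk package. The token, channel, and file ID below are placeholders, not our production setup:

```python
# Hypothetical sketch of the "NPDB Bot" alert from our lean experiment.
# The token, channel, and file ID below are placeholders.
from slack_sdk import WebClient

client = WebClient(token="xoxb-placeholder-token")

def alert_npdb_ready(vc_channel: str, file_id: str) -> None:
    """Tell a VC that the NPDB has been run on one of their files."""
    client.chat_postMessage(
        channel=vc_channel,
        text=f"NPDB has been run on file {file_id}. "
             "You can Close Out this file now.",
    )

alert_npdb_ready("@a-vc-username", "FILE-1234")
```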

Across two lean experiments with 6 users, 5 users noticed NPDB, trusted the result, and Closed Out their application. For files with no outreach required, VCs were 80 to 100% faster than their normal FPH. NPDB goes into production in September '21 with estimated annual savings of $850k.
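As a back-of-the-envelope check on how an FPH jump becomes a dollar figure, the arithmetic looks like the sketch below; the annual file volume and hourly cost are made-up placeholders, not Aperture's actual numbers:

```python
# Back-of-the-envelope FPH savings math. The inputs below are
# hypothetical placeholders, not Aperture's real volume or cost.

def annual_savings(annual_files: float, hourly_cost: float,
                   fph_before: float = 2.00, fph_after: float = 2.44) -> float:
    """Labor-hours saved per year, times a loaded hourly cost."""
    hours_saved = annual_files / fph_before - annual_files / fph_after
    return hours_saved * hourly_cost

# With made-up inputs of 250k files/year at $38/hour, this lands
# near the $850k estimate: ~$857k.
print(f"${annual_savings(250_000, 38):,.0f}")
```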

By ramping up the fidelity of the prototypes, I was able to determine user excitement, comprehension, and value, in that order. And because we ran 86 user interviews with 64 users (65% of the userbase), training was already half-done.

Data Quality: Eating our Own Dogfood

Synergy imports the new application PDF and merges it with existing Aperture provider data. This theoretically saves a VC from keying everything from scratch off an image of the PDF. Instead, a VC should edit, add, or remove information based on how "right" the merged data is.

Through VC interviews, we determined office location was the most frequently edited imported data and the easiest to fix. Synergy regularly imported the newest office's address rather than the fixed mailing address.

From there, the PMs and I went into the files ourselves. It was important to see the data and the extent of the data-quality failures first-hand. To do this, we became experts at the VC office process. We went through over 60 files to establish the types of import errors, the error rate (85%), and potential fixes.
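The audit itself was just careful bookkeeping. A minimal sketch of the tally we kept, with hypothetical error categories and entries standing in for the real audit data:

```python
# Sketch of the file-audit tally. Categories and entries here are
# hypothetical stand-ins for the real 60+ file audit.
from collections import Counter

audited_files = [
    {"file": "A-001", "errors": ["newest_address_instead_of_mailing"]},
    {"file": "A-002", "errors": []},
    {"file": "A-003", "errors": ["missing_suite", "bad_affiliation"]},
    # ...one entry per audited file
]

by_category = Counter(e for f in audited_files for e in f["errors"])
error_rate = sum(1 for f in audited_files if f["errors"]) / len(audited_files)

print(by_category.most_common())
print(f"Files with at least one office import error: {error_rate:.0%}")
```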

We set a baseline for demographic, office, and license import using real credentialing files.

After improving the import, merge, and affiliation of offices, we worked offices for all newly imported credentialing files for a week. This helped us check for any new issues, determine what hadn't been fixed, and establish a new fail rate for office import. It was also important to develop empathy for our users by experiencing the work they did and making sure we didn't make their process worse.

Across three rounds of releases, offices imported and affiliated correctly 88% of the time, up from 18%. Eating our own dogfood was the key to releasing worthwhile, safe, and incremental import releases. Overall file quality has improved from 89% to 93% since release.

The address section of Synergy.
Every morning, the PMs and I would go through the day's new files and work offices. (Left: imported data; right: application.)

Retrospective / Do Differently

The Advantage/Synergy project was data-heavy and full of complex flows. My user interviews and tests were only as worthwhile as the content put into them, and the project was an opportunity to collaborate with PMs, users, and SMEs to become an expert at our product. We ultimately found, validated, and delivered two features that speed up credentialing and increase quality.

In retrospect, the hardest part of the NPDB service was integrating the request and response into Synergy/Advantage itself. Earlier in the project, the PM and I discussed creating NPDB as a standalone site where users could request and download NPDBs themselves. Users specifically had issues with file upload in Synergy, and we worried a standalone flow would create a roadblock. While integrating NPDB was the right decision for users, I wish I had brought up testing the standalone site to see if we could have simplified the design.