Egencia Onboarding

Overview

Egencia, the B2B travel subsidiary of Expedia, helps 8 million business travelers book and manage trips around the world. Competitors include Booking.com, Airbnb, Concur, and TripActions.

By executing a multi-pronged self-service strategy, our team saved the company $10M annually (7% of operating costs).

My role

Sr. Product Designer

The team

1 x Sr. Product Manager
1 x Researcher
1 x Data Analyst
6 x Engineers

Timing / Duration

2018 / 3 months

Background

Companies can improve their bottom line by doing 1 of 2 things -- increasing revenue OR reducing costs. I was on a cost-reduction team during my time at Egencia.

In 2018, the executive team asked our team to reduce the company's highest operating expense -- $150M/year spent on servicing customer support calls.

Reports indicated that nearly half (gasp! 😱) of all travelers called Egencia travel agents at some point on their journey.

Each call cost the company $17. At Egencia's scale, even a 1% contact reduction would lead to $1.5M in savings.
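
The back-of-the-envelope math (assuming the full $150M/year went to servicing calls at $17 apiece):

  $150M / $17 per call ≈ 8.8M calls per year
  1% contact reduction ≈ 88,000 fewer calls x $17 ≈ $1.5M saved annually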

Breakdown of user calls, by country.

The Users

Personas were largely determined by job function and associated frequency of traveling. At a high level, these were categorized as the following:

  • The Executive (1-2 trips/month, booked by Executive Assistant)
  • The Road Warrior (weekly travel, often to same locations repeatedly)
  • The Annual Offsite Traveler (1-3 trips/year)
  • The Executive Assistant (booking on behalf of 1-3 company executives)
  • The Bulk Arranger (booking for big company events -- trainings, offsites)
Egencia personas (outdated, but you get the idea)

The Strategy

To develop our strategy, we analyzed our goal through 4 lenses:

  1. What do our customers expect?
  2. What are our competitors doing?
  3. What are our team capabilities with current resourcing?
  4. What does the company want/expect?

The Bets

After analyzing the problem through these lenses, we narrowed our efforts to 3 team charters comprising 5 feature bets, along with a commitment to test across the product.

  1. Educate Users -- Our team's core charter
  2. Create an effortless journey -- Think horizontal, across product verticals
  3. Influence vertical product teams -- Evangelize friction-reduction
Our team charters with correlated feature work (in yellow)

We then prioritized our big bets. It became clear that the contextual help center, paired with A/B tests deployed across the horizontal product experience, would deliver the biggest impact.

6 work efforts mapped via Value x Effort matrix

My Process

For the purposes of this case study, I'll discuss the help center redesign rather than the many A/B tests we ran, such as the IVR and chatbot projects (these are outlined in other WIP case studies).

Help Center

To begin my discovery process, I audited products for inspiration, and even user-tested against our competitors (link).

The help center needed some product love -- this was a known issue. The content was relevant, having been refreshed by the marketing and customer support teams, yet the help center itself was merely a list of articles living in its own app.

We wanted to redesign the help center to make the content more dynamic and timely, and I had a strong sense that making it contextual would be the ideal experience. Plus, it played well with the future idea of integrating a chatbot (see further case studies).

I was heavily inspired by Airbnb's help center due to its flexible, intuitive, contextual support. Airbnb's help center tested best of all the competitors I evaluated on UserTesting.com.

Hypothesis

By creating a contextual help center that surfaces relevant help articles before showing contact options, we can help users self-serve more easily without leaving their current flow, thereby decreasing call propensity and contributing to higher net revenue.

Solutions

To begin, I blockframed as many ideas as I could to get all the concepts out of my head. After a week of iteration -- gathering feedback from cross-functional stakeholders and constraints from engineers -- I dove into testing 3 concepts via UserTesting.com.

Option 1: Channel Guidance

Intended to tactfully interrupt with a help article

Option 2: Channel Guidance + Rich Trip Info

Intended to tactfully interrupt with help articles plus targeted trip information

Option 3: Simple and Straightforward

Intended to get quick signal and verify our hypothesis about contextual help

The unmoderated user testing results were positive for Options 1 and 3, but not for Option 2. Users felt Option 2 made them jump through too many hoops to get their answers, which was frustrating to hear because that was the option that would have let us surface the most confident help articles.

Results

For v1, the PM and I decided to launch with Option 3 to cut down on the engineering cost and get something out to market quickly.

The test quickly ran to statistical significance. It FAILED miserably -- calls went up by 5.9%. Hmmm, what's up with that? 🧐

Testing

When launching the v1 design, I had suspected that the Contact Us fallback button at the bottom of the page might be too discoverable. We also discussed showing the button only AFTER a user searched, rather than at the top level. So, we decided to run a multivariate test as follows:

  • Variant A = Legacy flow
  • Variant B = V1 + reduced button prominence
  • Variant C = V1 + reduced button prominence + button shown after search

Variant C drove calls down by 2.8%, emerging as the clear winner of the multivariate test. We rolled it out globally.
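
Back-of-the-envelope again, assuming the 2.8% reduction held at global scale and the $17 cost per call stayed constant:

  2.8% of ~8.8M annual calls ≈ 247,000 fewer calls
  247,000 fewer calls x $17 ≈ $4.2M in annual savings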

My Learnings

Aggressive testing is key to rapid learning

Because we were a public company, executive leaders were very conservative with our testing strategy so as to not 'rock the boat' when it came to net revenue. We were therefore encouraged to test in smaller geographies with high CSAT. This shrank our bucket of eligible users, which led to longer-than-necessary wait times to reach statistical significance. I grew weary of such a cautious approach, as some of our tests ran for months! Sometimes you need to experience how not to do things to realize how they should be done. New motto: Ship frequently, measure a lot, and learn quickly.
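
For the statistically curious, here's roughly why smaller buckets hurt (a textbook approximation, not our actual power analysis). For a two-sided test at 95% confidence with 80% power, the sample needed per variant is about:

  n ≈ 16 x p(1 - p) / δ²

where p is the baseline contact rate and δ is the minimum detectable change. Halving the eligible traffic roughly doubles the runtime, and chasing a small δ inflates n quadratically.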

Keep it ethical AND practical

In the end, I'm glad we ran the multivariate test with the more extreme Variant C option. I had worried that it was a borderline dark pattern, as it didn't feel ethical to bury the Contact Us button. So, I made sure we monitored CSAT as a correlating metric. We ended up doing no harm to customer sentiment, and found that more customers were finding and reading the suggested help articles. I was pleasantly surprised. Sometimes, what may feel unethical is actually just what the user AND the business needs.