Designing a Single Experience for Different Users

Continuous Design
UXDX USA 2021

At Kaplan, Raina was tasked, alongside a digital transformation team, with shifting the in-person learning experience for tens of thousands of students to virtual classes. There were many elements at play: a culture shift to being product-led, a business looking to define the ROI of continuous innovation, and a single customer experience being designed to serve learners across 10 different courses. So how did Raina's team design and advocate for a single customer-centric UX?
In this session, Raina will talk about her journey through this transformation and touch on:

  • Why the team moved from using personas to mindsets in order to create successful and consistent user experiences;
  • The metrics she developed to measure UX success throughout the process; and
  • The challenges she encountered on this journey and the learnings she picked up along the way.

Thanks for joining me today. I'm here to talk about how to keep UI simple and costs down by designing a single experience for multiple types of users. By way of introduction, my name is Raina Mehta. I'm the Head of Product and Design at Kaplan, and I've spent the last 15-plus years designing digital experiences at companies like Amazon, NBCUniversal, Prudential, and Kaplan. I ended up in product largely because I'm a maker at heart and a collaborator who enjoys seeing ideas come to life in the digital landscape, and those skills have served me really well in product over these 15-plus years. A little bit about my approach: I thought it was important to take a minute and talk about the ways I operate within product and design, which I think have led to some of the success. At the heart of it is that I consider product management a team sport. It involves many different functional areas and constituents, and once organizations understand what the product function is all about, they have tons of ideas to share, and expectations to go along with them. The joy and the challenge of being a product manager is being able to manage all of those ideas, have those collaborative conversations, understand leadership's barometers for success, and then respond to them in some way, shape, or form. Collaboration is really a key to success as a product and design manager. Secondly, customers are critical inputs to the product roadmap. The success of an organization fully depends on your ability to ingest customer feedback through qualitative or quantitative means, and knowing when each of those is appropriate is both an art and a science. Qual and quant research come together to inform and paint the picture of who your customer is, and once you understand your customer, you can build a product that serves their needs.
Finally, I think product management requires humility and a willingness to be proven wrong, as well as the ability to pivot and operate in what is an ambiguous environment. The desire to be led by your customers, as I just mentioned, instead of by your own intuition and gut instinct is a huge factor in the success of a product manager, and it's also something that takes years to really adopt, understand, and hone. I'm speaking from my own experience. Having said that, I wanted to talk a little bit more about Kaplan and some of the experience I've had there. You may have heard of Kaplan; most people know Kaplan from its test prep business. It offers courses, in person and online, to prepare for all types of exams, from the SAT all the way to the MCAT, the GRE, the bar exam, med, and nursing. When I started at Kaplan about a year ago, my job was to redesign the 15 to 20 different types of online test prep courses into one singular experience. It was a daunting task, because students of each course were considered unique and deserving of their own custom experience, their own curriculum, their own methodology for learning. That made it really tricky to think about how we would reduce costs by building something singular. Every test had a persona that looked very different, so we had essentially too many custom experiences, too many personas to design around, and, as a result, a multitude of backends and operational differences. Figuring out how to build something more cohesive and singular for all of these different students was going to help us with margins, with our overhead, and with processes, so everybody was really excited about it, but it was a real challenge to figure out how to do it. One of the early decisions I made was to move us away from personas and away from thinking about every student as so unique.
Personas can sometimes trap us as product and design managers in the weeds: we think about idiosyncrasies that are very tied to one person, so we needed to extract ourselves and bring our thinking one level up. Instead, I turned to mindset segmentation (you can Google mindset segmentation if you want to learn more about it). Essentially, mindsets help you cast a wider net so you can build for a broader range of students. In our case, mindsets focused more on the goals, motivations, ambitions, emotional needs, and beliefs of the users. Instead of a persona such as "Joe Smith is a 35-year-old male who's studying for his CFA because he wants to take his career to the next level," you're thinking about Joe as a self-directed learner: he has a high level of confidence in his ability to learn material on his own, he needs flexibility in his schedule, and he's impatient with group classes because he wants to learn at his own pace. You can see the difference in how you think about a design, a product, and the features that will guide the design once you have a broader categorization around mindsets. We found a lot of commonalities across these 15 to 20 different student personas as we looked at mindsets, and that enabled us to build something that was more common across all of these students. So we took the legions of research we have on our test takers (Kaplan is great about having a lot of research and anecdotal qualitative research available) and organized it into mindsets, and what we found were two distinct and clear delineations in the ways people learn and choose to engage in test prep. The first is that many students are Guidance Seekers: they want to be led through their learning, and they want a teacher who can create a roadmap for them, keep them on track, and keep them accountable. The second is the self-directed learner.
Self-directed learners are the mindset I was talking about earlier: they're more confident in their abilities, and they want the autonomy to do things their way and create their own roadmap. What was very interesting about this exercise is that for so many years, most students were automatically categorized as Guidance Seekers. Nowadays, we're finding more self-directed learners, because there's so much self-direction inherent in so many online services, and with personalization and gamification, students, especially digital natives, are really learning how to navigate digital assets on their own. Whereas Kaplan was originally building primarily for Guidance Seekers, we realized that a lot of the population is now skewing towards self-directed learners, and that helped us plan and strategize how we wanted to grow our business. Once we were able to draw these delineations, we were able to very clearly create one experience. In the most robust and comprehensive journey, a student takes a diagnostic test that tells us a little bit about their skills and aptitudes, and from there we customize their curriculum. They'll take a live class, followed by videos where they can dive deeper into topics. Then we'll test them with questions, and finally they'll take a practice exam that is akin to the actual SAT, for example, as they get closer to test day, followed by another live final review. This was our experience for the Guidance Seeker, and it worked fine. But what became interesting, since we had broken this into widgets for the Guidance Seeker, is that based on what we discovered about the self-directed user, we could present only a subset of these widgets to the self-directed student.
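To make the mechanics concrete, the "one linear path, widgets toggled per mindset" idea could be sketched roughly like this. The widget and mindset names here are illustrative assumptions, not Kaplan's actual system:

```python
# Sketch: one linear journey; each mindset simply switches widgets on or off.
# Widget and mindset names are hypothetical, for illustration only.

# The full journey, in order, for the most comprehensive (Guidance Seeker) path.
FULL_JOURNEY = ["diagnostic", "live_class", "videos",
                "practice_questions", "practice_exam", "final_review"]

# Per-mindset configuration: which widgets are enabled.
MINDSET_WIDGETS = {
    "guidance_seeker": set(FULL_JOURNEY),  # everything on
    "self_directed": {"diagnostic", "practice_questions", "practice_exam"},
}

def journey_for(mindset: str) -> list[str]:
    """Return the student's journey: the same linear path,
    filtered by the widgets enabled for this mindset."""
    enabled = MINDSET_WIDGETS[mindset]
    return [step for step in FULL_JOURNEY if step in enabled]

print(journey_for("self_directed"))
# ['diagnostic', 'practice_questions', 'practice_exam']
```

The point of the design is that there is only one journey definition to build and maintain; mindsets are just configuration over it.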
And so the whole linear path actually stayed the same, but we just turned different widgets on and off based on the mindset mapped to particular students. A self-directed student would come in, take a diagnostic, and then, based on that, we would present them with practice. They just want to practice: "Based on what you know about my strengths and weaknesses, just hit me with questions and I'm going to hone that muscle memory. Once I feel like I'm getting a score that's high enough, I'll be ready for the practice test, and I'm good. That's all I need." Once we defined the Guidance Seeker path and the self-directed path, we went back to the personas: for each exam, the diagnostic test, the content of the questions, and the practice tests are all going to be different, and that's where custom configuration comes into play. You can still build customization based on those personas and tests, but the overall journey and experience across all of the tests was going to be the same, and largely the same for the two different mindsets. We were able to curate something very cohesive and consistent, and it reduced so much of our overhead. The next thing we did was define success, because we had to understand how we were going to design and build, and then work backwards from that success metric. Now we're moving from understanding customer goals to understanding business goals. We first aligned with our leadership on how we would measure success, and then worked backwards from there. This was the set of organizational goals we were looking at: at every stage of the customer journey, we defined KPIs to help us stay focused. At the acquire stage, which is where we started, we would measure something like free-to-trial conversion.
What's the percentage converted from free to trial? What's the percentage converted from trial to paid? Then we cared about engagement next: once we've acquired them, how are we engaging them? We decided on repeat visitation as our primary measure of engagement, as well as the time it takes for them to engage in that first activity. When we talk about value next, we're talking about how we make sure the student is learning the right things and is actually going to get the score they want on the test. That's where we work with our data science team, our psychometricians, and it's really important, because if we put people through this journey and they get from start to finish and they're happy, but then they get a score that has nothing to do with where they wanted to be, then we've failed. So that was another KPI we needed to measure. Finally, the delight factor for us was really around NPS. We don't have a subscription model; our model is transactional. For people who do have a subscription model, these metrics will look very different: perhaps you're looking at annual revenue per user, retention rate, cancellation rate, that kind of thing. From these top-level goals, what we did after aligning with leadership is that each team would decide how they could impact the top-level goal, and then they would assign their own team-level goals around that. This is essentially an OKR system: you define the objective at the top level and the key results each team is responsible for, all driving towards that top-level objective, and it allows each team to have autonomy. It really helps with the autonomy that product and design want, where leadership can define that goal and then say, "You're closest to the customer. You're closest to the research, the data, the product, the technology, the costs."
"So you figure out how we get there; we're just going to tell you where you need to get and what our expectations are of you." And that works really nicely. I think this is a really critical step for creating empowered product and design organizations that are also focused on business goals. In our case, say the top-level goal was to increase repeat visitation by 10%. Product may decide that they're going to use mobile notifications to drive repeat visits, and that's what they focus on this quarter; they go off and build notifications that tell a student, "Hey, it's time to practice. Come back to your coursework and do your 10 minutes of daily practice." The UX team may decide that gamification with streaks and levels will help motivate the user to keep coming back, and marketing may decide that a new weekly email showing the user their course progress will keep them coming back. That's a really important step. Finally, once you've figured out at a team level what you're focused on, you can really design around making sure you're hitting these particular KPIs. You start with open-ended discovery, but at the end, once you've got your hypothesis and you've designed something, we validated against the KPIs. For example, if we think that streaks are going to motivate, then we're going to put this dashboard, this gamification approach, in front of users, and we're going to have task-based ways of understanding: did it motivate them to come back, or did it demoralize them? Does the mobile notification incentivize them to come back, or is it just something where they say, "Stop annoying me, I'm going to turn you off"? That's where the validation and the testing need to marry up with the objective, and that's how we made sure that everything was quantitatively driven. There has been a history of
preference-based design testing, where you put designs in front of students and ask, "Do you like this?" or "What do you think of this color?" We tried to move away from that and really focus our design around KPIs. Finally, we haven't launched this new experience yet, but what I do know is that launching is just the beginning for us. Many people think of launching as the culmination of months of work and team effort, but I tell my team it's just the beginning, because it's the data that tells us whether we're on the right track. We make a huge number of assumptions; even the tests we run to validate are just proxies for how users might behave. The only true test is putting it in the wild, and at that point we learn: did we get it right? So we have a structure built very much around the sense-and-respond methodology that some may know: you build, you measure, you learn, and then you optimize, in a continuous optimization cycle. And when you're learning, you're really learning against the KPIs, because remember, you had identified repeat visitation needing to go up 10%. So if after, let's say, six months, you're not seeing either a trend in that direction or the target met, then you know that something, whether it's the mobile notifications or the emails, needs to be modified, and you need to continue to refine; that's where you go back into build. We have top-level KPIs, but then, as most teams will, we also have story metrics: the actual metrics that help you decipher what's working and what's not. So we're moving towards a regular weekly or bi-weekly readout of these story metrics to the team so we can keep on top of what we think is working and what isn't, and have group discussions about where we think some of the friction might be that we could unlock and optimize.
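The funnel and engagement KPIs described above are simple ratios; a readout script could compute them along these lines. The functions, counts, and thresholds below are illustrative, not Kaplan's actual metrics pipeline:

```python
# Sketch: computing acquire-funnel KPIs (free -> trial -> paid) and a
# repeat-visitation engagement KPI. All numbers are made-up examples.

def funnel_rates(free: int, trial: int, paid: int) -> dict[str, float]:
    """Percentage converted at each stage of the acquire funnel."""
    return {
        "free_to_trial": 100.0 * trial / free,
        "trial_to_paid": 100.0 * paid / trial,
    }

def repeat_visitation(visits_per_student: list[int]) -> float:
    """Percent of students who came back at least twice (engagement KPI)."""
    repeats = sum(1 for v in visits_per_student if v >= 2)
    return 100.0 * repeats / len(visits_per_student)

rates = funnel_rates(free=10_000, trial=2_500, paid=500)
print(rates)                                # {'free_to_trial': 25.0, 'trial_to_paid': 20.0}
print(repeat_visitation([1, 3, 2, 1, 5]))   # 60.0
```

A weekly readout would track these numbers over time against the agreed targets (e.g. repeat visitation up 10%) rather than judging any single snapshot.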
And this is where the humility of product and design becomes important, because we get really personally invested in our solutions, and the problem is that this can blind us to doing the right thing by our customer. If we launch and the usage data tells us that self-directed students are abandoning at the diagnostic test, for example, then yes, that's going to be a huge surprise to me, but I'm going to run an A/B test and say, okay, if I remove the diagnostic for 50% of students, is this group more likely to get to the next step of the journey? It's okay to be wrong; it's part of the job. It's not okay to ignore the signals from your customers because you think your intuition is more right. So that is the recap of where we are and where we've been. I think it's a good place to sum up, and the takeaways I would share with you are these. Segmenting by mindset is a different way of approaching design. It's been really successful for us, it definitely helps you find more commonalities if you have different types of users that you're really struggling with, and it helps you focus on the jobs to be done for the customer. Designing against data-driven goals is also something that has been really new for us. It's not easy; it's really not easy to align all these different areas of the organization around particular KPIs. But if you can help your leadership understand the benefit of it, it can start there, and it really does help the organization: you have less friction among teams, and it's just much more efficient in creating a customer-centric experience. And then finally, we're going to launch and learn. That is the exciting part of it all.
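The kind of A/B check mentioned above (remove the diagnostic for 50% of students and see whether that group progresses further) could be evaluated along these lines. The counts and the use of a two-proportion z-test are illustrative assumptions, not a description of Kaplan's actual analysis:

```python
# Sketch: did removing the diagnostic (variant B) improve progression to the
# next journey step vs. keeping it (variant A)? All counts are hypothetical.
import math

def progression_rate(reached_next_step: int, total: int) -> float:
    return reached_next_step / total

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-proportion z-statistic: is the B-vs-A lift signal or noise?"""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p2 - p1) / se

a_rate = progression_rate(reached_next_step=420, total=1000)  # diagnostic kept
b_rate = progression_rate(reached_next_step=510, total=1000)  # diagnostic removed
z = two_proportion_z(420, 1000, 510, 1000)
print(f"lift: {b_rate - a_rate:.2%}, z = {z:.2f}")
# With these example numbers, z is well above 1.96, i.e. significant at the 5% level.
```

In practice a stats library (e.g. statsmodels' `proportions_ztest`) would typically be used instead of hand-rolling the formula; the point is that the decision to keep or drop the widget is made by the data, not by intuition.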
This is also something that is hard for some organizations to orient around, but continuous innovation, or continuous optimization, is definitely something I've had to discuss with leadership before we started the build: "Let's understand that what we build is going to be flawed, because again, we're making assumptions, and we need to have the conversation now, before we launch, to agree that we're going to reserve capacity for continuous optimization." If we don't, then we will be leaving money on the table and, essentially, doing a disservice to our customers. That's often a harder one to get folks outside of product and design wrapped around, but it's a really important one that I would encourage you to advocate for. So I hope this was helpful to you as you find and define your own approach to building product experiences for your customers. If you have any questions, feel free to message me through LinkedIn or other means.

More like this?

Tue, Jun 15, 9:30 PM UTC

Product Evolution: The Journey Of Humanizing Digital Experiences
Mansi Kamdar

Principal Product Manager - Director, Walmart Labs

Wed, Jun 16, 7:10 PM UTC

Designing a Cohesive, Not Consistent Experience
Richard Dalton

Head of Design, Verizon

Raina Mehta

Head of Product & User Experience, Kaplan


Sadia Harper

Head of User and Product Strategy, Bright

Wed, Jun 16, 7:40 PM UTC

Digital Transformation
Pete Anderson

Chief Product Officer, Transformation Practice, Keyot

Courtney Kissler

Chief Technology Officer, Zulily

Adam Furtado

Chief Product Officer, Kessel Run / U.S. Air Force