Continuous Research: The Qualitative Approach Of Preply



Continuous Discovery
UXDX Europe 2020

Up-front research is important but the user needs evolve and change over time. In this talk, Kate will describe the continuous user research methods used within Preply and how they take action based on the results of the research.

  • How do you ensure constant learning and understanding of your users' needs
  • How to validate your results
  • How to get full team buy-in and increase customer understanding

Today, the name of my session is UX Research That Makes a Difference.
Let's start by talking about how topical this field is at the moment. From the 1980s to 2017, the UX profession grew from 1,000 specialists to 1 million worldwide. That's a bold statement, huh? But not really, because the Nielsen Norman Group, which follows the UX research field closely, projects that by 2050 the profession will grow to 100 million. I know it's hard to believe, but even so it shows the growth of the field, and it shows the importance of learning about our customers, talking to them and receiving constant feedback, especially in the period of COVID-19, because we cannot just trust the quantitative data. We need to gather user opinions and to understand user behavior.
So, who am I? Again, my name is Kateryna and I'm the Head of User Research at Preply. Preply is a global marketplace for online tutoring, so in case you want to learn any language in the world, you can go there, choose a tutor that you like and have classes with them on our video platform.
Let me give you a brief introduction to what was going on when I joined Preply three years ago, to help you understand how we got to having different research methods working in our company. What was happening is that our product squads were developing different types of features, which we called Key Rocks. As the name suggests, a Key Rock was a huge feature that would be developed over around one quarter. It would be a large, solid piece of functionality, so it was really time-consuming to develop, and despite all the time spent on development, the prioritization was essentially guesswork. The ideation process was the following: all our product teams would brainstorm inside the team what they thought would be topical for our customers. So, they would not really solve user pains, but rather satisfy the ideas they had in their minds. And why did I pick this emoji? Because in the end the product became more or less Frankenstein-ish: there were some functionalities with low adoption rates, some functionalities that were not easy to use, and in the end we were reflecting and thinking, "Okay, what's going on? We're spending lots of time and lots of money and ending up with a not very likable product. What can we change?"
So, what did we do? We first adopted an A/B testing framework: a continuous experimentation framework that targets speed of launch. Our benchmark goal is to launch one A/B test a day. We're not there yet, but we are moving in that direction. We've also adopted the human-centered design framework, which means that we need to involve users throughout the whole process of product development. We started with researching who our customers are, what needs they have and how we can satisfy them. We showed them our early prototypes, reflected on the feedback we gathered and changed the product, and this showed pretty interesting results. Since the A/B testing and human-centered design frameworks were adopted, our conversion rate from leads to payment has grown by 17.9%, and the company was growing year over year before the lockdowns started. Of course, it sounds pretty fascinating.
So, did everything change after just adopting two different frameworks? No, of course not; it was a complicated process, and what helped is that we launched, at the high level, at the company level, OKRs for all PMs, and they committed to own customer development, right? So, to take part in different types of research. For example, here's a screenshot from the software that we use to track our OKRs. It says "improve customer development in the CRO team" (which is the Conversion Rate Optimization team), and the key result was to speak to at least five customers at the very beginning. It actually improved engagement in our company: our PMs started talking to our users more, and they were more engaged in acting on the insights that we were sharing.
As we reflected on how we had adopted all these frameworks in the company, we started structuring all the research methods that we can use for solving different types of problems, and here I've tried to show the common problems that we solve with the help of research at Preply. These are: validation of product ideas, identification of market opportunities, uncovering new product ideas, understanding quantitative data, detecting design and copy issues, and development of a user-friendly experience.
Below the statements, I've tried to list the different types of research that we use to solve each particular problem, and in order for you to choose the best type of research for solving these problems, I suggest you answer these questions. They will help you understand which one is the best. What we usually do is discuss with our stakeholders: What purpose are we pursuing? What do we want to solve with this research? Maybe we can even move forward without the research. What expectations do they have? What type of data do they want to receive in the very end? What data do we already have? And the main questions, which are really important to define so that the study doesn't become too broad, because not having a focus is not a good idea and we lose all the assumptions and hypotheses.
Here in this slide, I've tried to show the different types of research depending on whether they are qualitative or quantitative, and whether we are observing what people do versus what people say. In this session, what I will do is give you a specific example from our experience of using each of these research methods.
To start off, I will give you some details and an example of using user interviews. User interviews can be used in the cases displayed on the left side of the slide, and the first example was related to our vocabulary feature. Our team had a hypothesis that we could develop a vocabulary feature in our mobile app in the form of flashcards. We were 100% sure that it would be successful, but as we understood, it's not a one-day A/B test. We needed to minimize our risks and to define whether this pain actually exists and whether flashcards would be the best solution. So, we talked to our users, both students and tutors, and realized that the pain actually exists, but flashcards were not useful according to their previous experience; they shared that the most efficient way of learning new words was to complete tasks within a real context. This type of research was ideal for this problem, which is validation of a product idea, because it provides lots of insights about whether the pain exists or not and how users currently satisfy this pain. Users may also mention competitors that you can refer to in your benchmarking studies, and in the end it allows you to minimize your risks and move forward with validated ideas. So, this is our experience of using in-depth interviews.
And let's move to the next type of research, which is field studies. There are lots of different names for this kind of research: contextual inquiry, observation, shadowing and many, many more, but let's not focus on that part. Let's focus on real examples so you understand the value of using this type of research. Our recent example of using field studies was on our search page. You see it here: we have filters over here, we have tutor cards over here, and what we wanted to get was ideas for our next quarter on what A/B tests we could launch to improve the North Star metric of the team that works on the search page. So, what we did is observe how our potential customers interact with the search page, and our main goal was to note the problems they face, any workarounds, or maybe some interesting user behavior. In the very end, what we noticed is that indeed there were different problems and different interesting user behaviors. For example, I will go back and show you: we have the main line of filters here and a sub-line, and students were not noticing this one at all. Also, they were clicking on the tutor cards here, and when you click on a tutor card, you are transferred to the tutor page, and they were not expecting to see this.
So, in the very end, it was pretty useful to launch a field study for this purpose, the ideation purpose, because it led us to a minimum of seven already-released A/B tests, and we have even more in our backlog. The benefits are the following: it's very insightful, you don't need lots of observation sessions to get these insights, and it provides more reliable results. Why so? Because in research, data gathered from observations is considered more reliable than data from interviews; people provide you more reliable insights when they do something rather than when they say something.
That's it about field studies; let's move to usability testing. I don't know if you know this, maybe you do, maybe you don't: there are mainly two types of usability testing, qualitative and quantitative. Formally, they're called formative and summative. We don't usually launch usability testing for everything that we produce at Preply, because as I've mentioned before, we are trying to move fast and to grow fast. Therefore, in this picture I've tried to show you how we filter, how we decide whether to launch the test or not. We do usability testing only in cases when it's critical to get something right or it's complex to make well. Simple A/B tests are not checked with the help of usability testing in our case.
Now, let's move to formative usability testing. Formative usability testing is a qualitative type of testing where you give predefined tasks, observe how a person interacts with the product, and try to define the problems or issues that should be fixed before you launch something in production. On the right part of this slide, you can see screenshots of our video platform and what we were trying to do with this formative usability testing. Our goal was to improve the adoption rate of our video platform; it was pretty low. Our target adoption rate was 80%, and it was 30% at the time. So, our hypothesis was that there were some problems preventing users from consistently using the video platform. What we did is conduct this formative usability testing, and we indeed exposed the main inconveniences. For example, students didn't know how to use the minimized-window option, and for them it was really crucial. What are the benefits of using this method for this purpose? It's very fast: you need only five respondents to get a viable result about the main problems that your users have, and as I mentioned before, when you observe what people do rather than what they say, the insights are more reliable.
Now let's move to summative usability testing, the quantitative usability testing. In our case, what we wanted to do with this type of research was to choose between two different prototypes: which one would perform better in terms of usability, so we could release it to production. The human-centered design framework says the more prototypes you have, the better the user experience you will build. The real example at Preply was related to our Weekly Lesson Booking Schedule option, which we wanted to launch in our mobile application.
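The "only five respondents" rule of thumb for formative testing traces back to the Nielsen/Landauer problem-discovery model. A minimal sketch, assuming the commonly cited average probability of about 0.31 that a single test user uncovers any given problem:

```python
# Nielsen/Landauer problem-discovery model: the expected share of
# usability problems found by n test users, where each user uncovers
# a given problem with probability p (~0.31 in the classic study).
def problems_found(n: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(f"{n} users -> {problems_found(n):.0%} of problems found")
```

With five users the model predicts roughly 84% of problems found, which is why small formative rounds are usually enough before iterating.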
As I've mentioned, the goal was to choose the best prototype of the two. Our hypothesis was that one of the two prototypes would perform better. What we did is launch this summative usability testing; we usually use the Maze software for this purpose, and it's very easy. You just upload the prototype, you run it with your sample, and in the end the software generates an automatic report for you, which is really easy to read and make decisions from. The results we received were that prototype B indeed performed better compared to prototype A in terms of the main usability metrics, effectiveness and satisfaction. Only the efficiency, which is the time taken to perform a task, was a bit higher than we expected, but it wasn't crucial for this feature, so we decided to release prototype B. So, what are the benefits of launching summative usability tests? They're very easy to launch: you don't need to moderate the test, create user scenarios and ask respondents to do this or that. The software does everything for you and generates the reports, and you will also know how to move forward, because it gives you some tips and tricks on how to improve.
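The comparison such a report automates can be sketched by hand: per prototype, effectiveness is the task-completion rate, efficiency is time on task, and satisfaction is the post-task rating. All numbers below are hypothetical, not our actual data:

```python
# Comparing two prototypes on the three classic usability metrics
# (effectiveness, efficiency, satisfaction). Data here is illustrative.
from statistics import mean

sessions = {
    "A": {"task_success": [1, 0, 1, 1, 0, 1],
          "task_time_s": [48, 61, 52, 57, 66, 50],
          "sat_1to5": [3, 4, 3, 4, 3, 4]},
    "B": {"task_success": [1, 1, 1, 1, 0, 1],
          "task_time_s": [55, 63, 58, 60, 70, 54],
          "sat_1to5": [4, 5, 4, 4, 5, 4]},
}

for proto, data in sessions.items():
    effectiveness = mean(data["task_success"])  # share of completed tasks
    efficiency = mean(data["task_time_s"])      # average time on task
    satisfaction = mean(data["sat_1to5"])       # average post-task rating
    print(f"{proto}: effectiveness={effectiveness:.0%}, "
          f"efficiency={efficiency:.0f}s, satisfaction={satisfaction:.1f}/5")
```

In this made-up sample, B wins on effectiveness and satisfaction while taking slightly longer per task, mirroring the trade-off described above.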
The next type of research is session recordings and heat maps, which we often use at Preply with the help of software called Hotjar. I think most of you are familiar with it, but just to reiterate why it's useful. Our recent case of using session recordings and heat maps at Preply was related to the calendar you see on the right side of this slide. We wanted to understand whether it's easy for tutors to set their availability there, because we had started to receive lots of complaints in our customer support that our calendar is not easy to use, and we wanted to check whether these were just complaints from chronic complainers or whether the problem indeed exists. With this goal, we launched Hotjar sessions on the availability page, and the results were the following: the hypothesis that the UI is not comfortable was confirmed. We spotted different patterns of setting availability. Our initial calendar worked by dragging and dropping different time slots, but tutors wanted to either drag and drop, or click and add manually, or copy-paste time slots. It was a nightmare: they didn't know how to use the functionality properly, and so we knew what to do next to improve this UI. So, what are the benefits of using Hotjar? It's fast to check whether the complaints are just complaints or whether there's something real behind them. The disadvantage is that some questions stay unanswered, because when you observe user behavior, you might not be sure why the user performs this or that action. You cannot ask questions, because you were just observing their screen. So, if you want to ask more questions, it's better to use formative usability testing.
There's also search log analysis, a very fast type of research. In case you have a search engine incorporated into your website, you can analyze the search queries and make sure your website has the content that your users are requesting. How we use it at Preply: we have a product called Preply Library. Our methodologists create different types of content for our students to learn the language, and we wanted to make sure we have everything our customers are searching for. We ran an analysis of the search logs, and the results were the following: our hypothesis was that there were some gaps we hadn't filled, and we noticed that some topics didn't produce any results. These were some grammar topics and topics about accent reduction, and we added these topics to the backlog. Now our methodologists' department is working on creating this content. The benefits of search logs are obvious: it's a quick, cheap and easy method to use. It's also useful because it gives you the user jargon: you literally see what users type in, so you know how to promote these topics further in their own language.
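As a sketch, the gap-finding step boils down to counting zero-result queries. The log format here (query, hit count) and the sample queries are assumptions for illustration:

```python
# Sketch of a search-log analysis: surface the most frequent queries
# that returned zero results, i.e. content gaps to add to the backlog.
from collections import Counter

log = [  # (query, number of results returned) -- illustrative data
    ("past simple", 12), ("accent reduction", 0), ("conditionals", 7),
    ("accent reduction", 0), ("reported speech", 0), ("phrasal verbs", 25),
]

zero_result = Counter(query for query, hits in log if hits == 0)
for query, freq in zero_result.most_common():
    print(f"{query!r}: searched {freq} time(s), no content found")
```

Sorting by frequency also preserves the users' own wording, which is exactly the jargon you can reuse when promoting the new content.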
Now, let's move to the final section, which is the quantitative part of user research. We can start with surveys, because this is of course the most frequently used type of research. Everyone knows it: you've either created one or participated in one. Our recent example of using surveys at Preply was when we wanted to make a decision about how to proceed with an A/B test. We had redesigned our My Messages page and it showed okay-ish results, but we wanted to make sure that we could expose it to 100% of users and that they would not have any complaints or suggestions on how to improve it. So, we followed up with a satisfaction survey with a 5-point scale question: "Please evaluate your experience with the new My Messages page." What we observed is that our respondents chose 4.2 on average as the satisfaction rate for this page, and that was a very good estimation, so we decided to scale this experiment. However, there was an open-ended question that followed, asking, "What can we do to make it a five for you?" There was an obvious pattern that we needed to add a search function, so tutors would be able to search their students by name. We decided to write it all down and launch it as a separate A/B test. So, why was a survey the best type of research for solving this kind of problem (the problem, I'll remind you, was to decide how to act on our A/B test)? The benefits are the following: it's quick to launch and you get statistically significant data, so you are backed up with numbers when you make any decision about how to proceed with your A/B test. However, in case you add open-ended questions at the end, you have a qualitative research part there, so you need to spend some time analyzing it and detecting patterns. If you have this time, do this research and it will point you in the right direction.
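A minimal sketch of this analysis step: average the 5-point ratings to decide whether to scale, and tally a recurring theme in the open-ended answers. The responses and the "search" keyword rule are illustrative assumptions, not real survey data:

```python
# Sketch: close the loop on a satisfaction survey -- mean rating on a
# 5-point scale, plus a crude theme tally over open-ended answers.
from collections import Counter

ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 4]   # illustrative 5-point answers
open_answers = [
    "add search by student name", "search would help", "looks fine",
    "please add search", "faster loading",
]

avg = sum(ratings) / len(ratings)
themes = Counter("search" if "search" in a else "other" for a in open_answers)
print(f"avg satisfaction: {avg:.1f}/5")  # e.g. scale the test if >= 4.0
print(themes.most_common())
```

In practice the theme tally is done by reading and coding the answers; the keyword match above just shows how a dominant pattern (here, "add search") surfaces from the free-text part.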
The next type of research is like a subdivision of surveys: the comprehension survey. It's a very cool and very fast method to check your hypotheses regarding UX copy or content issues. Our recent example of using a comprehension survey was perhaps a fun one. We had the idea of creating, over here (this is a screenshot of our video platform), in this section, a message to the tutor: "You have X hours scheduled with this particular student," not just "X hours scheduled with your students." The hypothesis behind this idea was that the tutor would notice this sentence and, in case there were zero hours, would message the student to purchase more lessons or to schedule more lessons with them. But we wanted to make sure tutors actually understood what we wanted them to do, so the behavior of our users would be the one we actually wanted. So we decided to launch a comprehension survey to make sure they understood our UX copy. What did our comprehension survey look like? There was only one question, asking how they understood this specific phrase: "X hours scheduled in the next seven days." The results showed that the majority of tutors did understand what the UX copy meant, but they didn't understand which student it related to. What we did is add the avatar of the student, and then the tutor knew what to do. The benefits of launching UX comprehension surveys: it's very fast, and it's very easy for copywriters and researchers to analyze any issues with UX copy and to speak the user's language in the very end, right? However, sometimes it's better to incorporate this type of question into your formative usability testing, because it's hard to predict whether respondents will be on the same page and in the context of using this flow, so there might be some issues with this. Just keep it in mind.
Our next type of research, related more to the UX copy and content part of the product, is the highlighter test. It's a very interesting type of research where you simply ask your respondents to highlight in green and red those parts of the content that make them feel confident or not confident. How have we used this type of research at Preply? We have a German version of the website, and our localization team had just recently started working on it, and we wanted to make sure we were moving in the right direction. Our hypothesis was: since we had only just started working on this, there might be some issues; maybe we had missed some cultural nuances, or we might not be fully aligned with our brand identity. So, we needed to make sure we were moving in the right direction. We launched this highlighter test, and the results were the following: everything was pretty okay, but some parts of the content caused uncertainty among German customers. We also asked one qualitative question: "Why did you highlight this part in red?" And they were commenting, so we also knew where users were struggling and how to rewrite this part of our product. So the benefits are obvious: you know how to make your content speak the user's language and how to create the user behavior you actually want on your website, and in case you don't want to add additional software for conducting different types of research, it can be done really simply. For example, in a Google Doc, you can just paste the tasks, ask respondents to share their screen and do the highlighting over there. So, it's really easy.
The final type of research, also for working with content and UX copy, is called the Cloze test. When can it be used? When you want to work with your UX content, as I said; in our particular case, we were using it for assessing our localization of words. On this screen, you see part of the Russian version of our website. Our hypothesis was that the main words that participate in our key messaging might have some different connotations and might also not be a cultural fit, so I wanted to make sure our Russian localization managers were working in the right direction. We launched this test, and the results were the following: we learned that some words have a negative connotation in our students' minds. For example, the word "teacher" was associated with a strict teacher at school, whereas when we say "tutor," it's more flexible; it's a friend that provides a positive emotion. After that, we understood that we needed to change this word in our key messaging framework. So, the benefits are the following: it's a quick and insightful tool for localization assessment. The main disadvantage is that you need to actually prepare the task with the cards; for some it might be complicated to do on the fly, because you need to recruit the respondents and you need to do it online. So, if you have the time to organize all of this, it's a good tool for assessing the effectiveness of your localization of words.
To sum up the different types of research that I've just introduced to you: here they are on this matrix, and whenever you want to decide what type of research to use, just use this matrix and think: what do I need to have? Opinions of people, or observations of what they do? Maybe what they say? Do I need something backed up with data, or do I just need to cover the main problems? This will help you choose the right type of user research method for solving your problems.
How did all this change our product? It's really simple: now all our hypotheses are tested really fast, we have the knowledge and insights to iterate, and I'd like to think that our product became more likable and easier to use for our users.
The key takeaways from this session: involve users in your product development (in case you don't have lots of resources, do it at least from time to time), involve stakeholders in the process (it will boost engagement), and choose the right type of research to make sure you have answers to your main questions.
It was a pleasure to share all of this, and in case you have some questions, I'll answer them.
