Conducting objective research - Mitigating your hidden biases
Collecting and interpreting data to inform product decision-making requires you to do so objectively, without being influenced by your own emotions and biases. Researchers have to be aware of their internal drivers in order to neutralize them.
During this talk I will share a use case where our research and product teams were so overconfident and opinionated in a specific area that we were not open to interpreting and appropriately considering the vital information collected from the internal customer support team. The result: the commencement of a project which succeeded in achieving the defined success metrics but completely missed the mark in solving the challenges the internal team originally reported.
During this session I will focus on specific biases to look out for and will share practical strategies to avoid them.
Some of the strategies I will share:
- Research “mindfulness” - practicing self-awareness so you are able to identify your internal drivers: regular self check-ins before and during projects, writing down your preconceptions and beliefs, as well as doing this practice as a team.
- Practicing humility by experiencing being wrong: the A/B test game show, where the team bets on which A/B test variant will win.
- Extending your access to varying points of view by including cross-team participants in research think tanks and planning sessions.
Yael Gutman, Senior Director, Digital Products, ASCAP
Hello, everyone. Thank you so much for joining me today at UXDX. My name is Yael Gutman and I've been in the product space for over 20 years, leading projects, running user research and advocating for product. Today, I want to share with you a use case that prompted me to identify practical strategies to help product advocates like myself collect and interpret data objectively, so that we can continue to provide our users with the most meaningful products.

The inspiration for this talk came from the internal customer service team at an organization I was at. This team shared a monthly report with management, capturing the main reasons users were calling into the call center. It was clear that, on a monthly basis and very consistently, there was a specific area on the website that drove most of the calls. Customer service had shared that this was making it difficult for them to serve their audience overall, and so management had requested that we investigate the area.

This was a familiar area to the product team. There was a previous project, around a year earlier, managed by the head of the product team, that focused on increasing the end-user conversion rates. That project was deemed successful by the heads of the product team, and so we came into the investigation with this pre-existing narrative. We started looking at the user experience and, as always, we found areas for optimization. We also pulled reports. Some of those reports included conversion rates for the end-user experience, and those were found to be high, well within industry standards. And so we concluded that while the calls were coming in, they were more or less just part of making the business work, and there wasn't any concern with the end-user experience. Since we did find a few possible optimizations, we recommended making some updates to the experience. While we were doing our investigation, another team suggested making updates to the navigation to that area.
So, there were two solutions put forward: one for the user experience in the area, and one for the navigation to that specific area. But both of them shared the same measure of success, which was increasing the end-user conversion rates. Just by the definition of the measure of success, it's clear that both teams were focusing on the end user and not on the reporting team, which was customer service. There was something that inhibited both teams from really listening to the reporting team, and we'll touch upon that as we proceed.

Both solutions were put forward to development, and both launched. We did a post-launch report for both, and both were successful in achieving the defined KPI of increasing the end-user conversion rates. After we had reported this out to management, and as the weeks went by, it was clear that while the end-user conversion rates went up, we were unsuccessful in actually making any change to the reported issue. There was no change in the number of calls that were coming in, or in the ratio of calls, and there was no real change in the types of calls that were coming in for this specific area.

In retrospect, there were three main reasons that contributed to our inability to look at the challenge customer service was bringing forward. First, the product team was suffering from arrogance. We had a pre-existing belief that the end-user experience was satisfactory, we used the quantitative data to validate that, and for that reason we thought that the customer service team was reading the situation wrong. Second, and this is really hard to say, we didn't treat the customer service team as a user group, although they had reported a challenge to their own workflow. Our discovery, our KPI and our solution were focused on the end-user experience, really minimizing and ignoring customer service.
And third, there was a lot of tension between the two teams that proposed the two solutions. We didn't realize that at the time, and it didn't allow us to collaborate and really leverage the experience and knowledge across the teams.

All of these are in clear violation of the cornerstone of user research: empathy. The key verbs in the definition of empathy are understanding and sharing, and we clearly did not take the time to understand the impact that these calls were having on the customer service team's workflow and day-to-day operations. We didn't investigate and we didn't understand the pain points. Only when you understand those pain points and the experience the user group is actually having, and only if you share those, can you actually come up with a solution to help resolve their issues. That is where we completely failed.

This realization was a huge blow for me and others on the team. Being empathetic is part of our professional persona, so to understand that it was our own internal emotions and unchecked biases that led us to a solution that was completely off the mark was a very hard pill to swallow. We were lacking the objectivity that would have led us to a more meaningful solution. Many times, we underestimate how our emotions, internal drivers and biases influence our attitudes and behaviors, and the direct impact this can have on doing user research and interpreting data. So, with that in mind, I want to share some biases and some strategies to avoid them.

The first bias I want to talk about is confirmation bias. This rears its head in user research when you gravitate towards specific responses and specific qualitative data that fit into your pre-existing narrative, and that is exactly what we did in this experience. Because we had the idea that this specific area was already dealt with in the previous project, we did not believe there was an end-user experience problem.
And so, we used the quantitative data to validate that the end-user experience was good, and this allowed us to minimize and ignore the customer service team.

Combating confirmation bias is made easier when you have a wide range of points of view within your own team, as opposed to the narrow perspective we exhibited in the example I shared. We have since expanded our user research ideation sessions to include representatives of other teams. We now have representatives of customer service, of the technical team and of QA. This has allowed us a greater understanding of our business in general and of our users specifically. Another benefit is that we have increased the breadth of our testing ideas, as we have much more to learn from. Another thing that we do is leverage the analysis partners we have outside of the organization. When we have an area of interest, we share that with these partners and we solicit from them, based on their experience with other clients across a wide range of industries, what testing ideas were tried and what was learned. This not only improves our research capabilities, but continually puts us in a place where we have to listen to others and learn from them, which is key for being in research.

It is also very important to employ techniques that force objectivity, and active listening is one of those. The goal of active listening is to capture the feedback that others have given you without adding any commentary. To do so, we listen more and talk less. Allow for negative space, where the users you're speaking with have the time to gather their thoughts; don't feel the need to fill that space with words. One thing that I do constantly is mirror what I'm hearing. After I think I've heard the point that the user is sharing with me, I summarize it and then I mirror it back. Many times, the user will say, "Yeah, that's exactly what I meant."
But many times, they will also say, "Well, that's sort of what I meant, but this is what I'm really trying to say." So this is really important in order to make sure that you have captured the essence of what the user is telling you. And the last thing is that active listening involves more than just hearing; it involves all your senses. It's listening, but it's also observing how the user is interacting with you during the session. There are a lot of behavioral cues that can give you a lot of context for what is being said versus what users really mean and how they actually behave.

Using quantitative and qualitative data together is key in order to have actual insights. The quantitative data is what the users are doing, and the qualitative is why they're doing it; only when you really marry the two do you have the full picture. In addition, if there is a discrepancy between the quantitative and the qualitative, that can many times give you a clue that you actually have a bias. In our example, there was a discrepancy between what customer service was telling us and the quantitative data that we decided to use to prove that the end-user experience was satisfactory. Had we taken a closer look, we could have understood that we were actually gravitating towards that piece of data and ignoring what customer service was telling us.

Communicating impartially is key for research, so that whenever you collect information, you do so without influencing the subjects you are speaking with. We spend a lot of time on the scripts for user tests, user interviews and surveys, and we do so so that we are asking the questions and managing the interactions in a way that doesn't influence the responses our subjects are giving us. Once we are happy with a script, we give it to peer review, to colleagues that are outside of the specific project.
Many times, those peers, who are not as close to the script and haven't been looking at it for as long as you have, will be able to find even more areas to fix so you can make sure that you are presenting yourself and communicating objectively.

The next bias that I want to talk about is implicit bias. In popular culture, this is stereotyping, and in user research it means that you are coming to the table with preconceived notions and generalizations about the users you are speaking with or the groups you are engaging with. In our example, we definitely had an implicit bias towards the customer service team. While we did understand their role in the organization, we underestimated their understanding of the website and didn't value their perspective, and for that reason we were dismissive of them. In order to avoid that happening again, we have since instituted periodic sync-ups with the customer service team. This is not with the management of the team but with the actual reps, and it has proven invaluable. They are on the front lines with our customers and have a lot of information that is key for us to do our jobs much better. So, we share with them our ideas for projects as well as for testing, and they have been able to share back feedback on what is a really good idea, along with enhancements for ideas. We've also been able to collect from them additional challenges beyond what we'd heard when we were just speaking with their management. So, I definitely very much recommend identifying any user groups within your organization that have that direct contact with your users, and leveraging them for your research needs.

In order to avoid your own biases, you first need to be aware of them. Only if you are aware can you put in place tools and methodologies to neutralize them. So, I suggest the following whenever you start a project, whether it be discovery or research.
First, check in with yourself and be honest: figure out if you have any preconceptions, any defensiveness, or any ego related either to the project you're looking into or to the user groups you're going to engage with. It's very helpful to write these things down; there's something about putting pen to paper that makes it real and makes you accountable for it. It's also useful to do this as a group. If your whole team writes down what their own preconceptions are and you share them, it's then easier, with a buddy system, to be accountable, help each other out, and point out when someone is actually going down a road that's based on their bias. I believe that had we done this in the example I shared, we would have noticed that we definitely had these preconceptions, and hopefully we would have been able to take a different path. Being self-aware requires intent on a regular basis, so whatever cadence works for you, you should do that. I do it at the beginning of each project, discovery or research, and throughout the project itself I go back to my notes to make sure that I'm continually aware of what my initial beliefs were and continually avoiding them. I know some people do this on a daily basis; some do it as a meditation, five minutes a day. Whatever it is for you, I suggest you adopt that practice.

Humility is key to being open and receptive to others and what they are sharing with us. It's important that we remember that our opinions are just opinions; they're not facts, and they are not more valid than anybody else's just because we hold them. Once you are experienced, it's sometimes easy to fall into the trap of believing your own opinions and the notions that you already have in your head, and so it's even more important to practice humility. A fun way to do so is to experience being wrong from time to time. So, one thing that we do in our organization is bet on which testing variant will win and by how much.
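The betting game is easy to run with a small script. Here is a minimal sketch of one way it could work; the bettor names, variants and scoring rule are all illustrative, not part of the original talk:

```python
# A/B test betting game: each team member bets on which variant will win
# and by how much. After the test, we see who picked the winner and how
# far off each predicted lift was. All names and numbers are illustrative.

def score_bets(bets, winning_variant, actual_lift_pct):
    """Score each bettor: did they pick the winning variant, and how far
    off (in percentage points) was their predicted lift?"""
    results = {}
    for person, (variant, predicted_lift_pct) in bets.items():
        results[person] = {
            "picked_winner": variant == winning_variant,
            "lift_error_pct": abs(predicted_lift_pct - actual_lift_pct),
        }
    return results

bets = {
    "Dana": ("B", 5.0),  # Dana bets variant B wins with a 5% lift
    "Sam": ("A", 2.0),   # Sam bets variant A wins with a 2% lift
}

# Suppose variant B won with a 3% lift.
results = score_bets(bets, "B", 3.0)
# Dana picked the winner (lift off by 2 points); Sam did not.
```

The point of the exercise is the results review, not the score itself: over enough tests, everyone ends up on the wrong side of a bet, which is exactly what normalizes being wrong.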
This ensures that all the team members will be wrong from time to time, and it normalizes being wrong: "wrong" is not a bad word. It just means your hypothesis was incorrect and has now been proven otherwise, and it's really important that we continually put in place ways to remind ourselves that it's not about being right; it's about testing hypotheses and learning from them.

Another thing that we do is celebrate learnings, formerly known as failures. Whenever we have a post-launch report or other results from a test that we ran, we socialize this within the team as well as with the other teams related to that specific area. When we share the results of something that didn't go as expected, we make sure to emphasize how important it was that we ran the test. For example, the costs the test avoided: costs we would have incurred had we moved to development without making sure that the impact would be what we expected. We also acknowledge the importance of the hypothesis itself, and how refreshing, humbling and important it is for a researcher, for a product person, to get down to earth and understand that we don't hold the answers; we're actually looking for the answers.

The strategies I spoke of until now were really related to the workspace. Now, I want to talk about something more holistic, which is continued learning. Continued learning lends itself to research because you put yourself out there: you must learn from others, you must collect information, you must be open to what others' experiences are. There are multiple ways of doing this. You can learn a new skill, related or unrelated to your work, and just by doing so you put yourself in the student's seat, which is super helpful in forcing yourself to listen and to learn from others.
I've recently learned a new skill, furniture refurbishing, something I've never done before. I've had some failures, but I'm learning new skills and new techniques and am very happy with the sense of achievement. Another thing is reading and traveling. This by default opens you up to a new world of concepts, environments, people and cultures, and it lends itself so naturally to then being able to engage with people in the user research world, to listen to them and gauge what they're saying to you.

To summarize: it's important that we're able to identify our internal drivers and our biases in order to keep them at bay and make sure that they don't influence us and don't blind us. For this, we have to put in place self-awareness techniques to identify what those drivers are, so that we are able to put in place tools to neutralize them. It's important that we continue to practice humility, so that we remember that our opinions are just opinions and we are here to actually learn and validate hypotheses, not prove them right. To help with this, it's important that we create an inclusive mindset where we have access to a wide range of points of view. It's important that we put in place tool sets that force objectivity: active listening, to make sure that we collect information without interpreting it through our own preconceptions, and using qualitative and quantitative data together, in order to see the full picture and to be able to spot any preconceptions and biases. It's also important that we communicate impartially with our users, so that we don't influence them with our perceptions and we allow them to respond in the way that is natural to them, without any impact from our own beliefs. And lastly, continued learning: this is so that we, as people, are in a place where we are continually open and receptive to others, which will naturally help us in our day-to-day roles.
Thank you so much for joining me today and enjoy the rest of UXDX.