Are your words working? Creating and sustaining a content-focused research practice


Continuous Discovery
UXDX USA 2021

Your content team needs to be confident about which specific words and phrases are resonating with your audience. And wouldn’t it be ideal to learn which words work before you launch your project or product? In this energizing, hands-on session, you’ll learn methods and tools for objectively evaluating how your customers are reacting to your writing—and most importantly, why they’re responding the way they are.
This is not “regular” UX research with prototypes of both content and visual design. It’s content-only research that helps you pinpoint the just-right words for successful, friction-free user experiences and stronger business results.

Hello and welcome to "Are your words working? Creating and sustaining a content research practice." I'm Erica Jorgensen, and I'm a senior content designer and manager at Microsoft. I've been there about four years, the last year or so on the UX team. I've also worked at Amazon and at startups like Rover, and I've taught in the University of Washington's digital communications program, specifically on web analytics. Here's what we're going to cover today: why and how to do content-focused research; how it can help you identify the exact words that are clearest and resonate the most with your audience; how to create business impact with content—maybe even drum up some additional headcount for your team because you're driving so much impact; how UX writers and content designers can be inspired by content research; and how it builds more respect for your content writers, because the work becomes so strong.

So, what am I talking about when I talk about content-focused research? Basically, it's a way to evaluate your content and make sure it's helping your audience make decisions and take action—and if it's not helping them do that, why not? The "why not," I think, is really where the golden nuggets can be found. It helps you find out what's clear and what's not, and which exact phrases to use. Sometimes these words and phrases will come out of the very mouths of your customers, and that's awesome—they can help you do your work. What I find super powerful about content research is that it helps you optimize content before you launch it. It's great to optimize your content after it's published, but why not know, when you're publishing, what's going to work? What would it feel like to be super confident that you're using the right words? Maybe I need a Doug Henning magic slide here, but this is exactly the power of content research. It's fairly quick.
You can often get results within an hour, depending on which audience you're using for your participants. You can do research with current, potential, or past customers. And one thing that I think is super important: it helps you check your bias. We're all biased, and when we think, "Oh, this is just right—I know this is going to work," how do you know that? You don't, and content research will help you check yourself and make sure you're being as customer-centric as possible. And there's gold here—so many golden nuggets come out of content research. You'll get insight after insight. You'll learn things you didn't even think about, and I'll provide an example of that in just a minute. You'll learn how to speak your customers' language, and that builds trust and loyalty, which I think is invaluable. And you'll drive business impact, because when you're using the words that are clear to your audience, they're more likely to do what they need to do on your website or in your app.

Content research can be done on all sorts of types of content: messaging frameworks—those are a great thing to test—calls to action, and product and feature names. Those don't get tested nearly enough, and at Microsoft, I'll speak for myself, I think there are some feature names that could have been improved—could have been user-tested—before they went out the door. And test your most important words, the words you use all the time, those go-to words. If your only reason is "the style guide says that's a good word," I would say, "Hold it, hold it right there." The words in your style guide might not have been validated—that's what I'm trying to say. You can use content research to make sure the words in your style guide are spot-on. This is a little hierarchy of what to test. Like I said, the frequently used words—the things you're assuming are clear—test them.
The words on your home page, on the landing page of your app—test those. Those are important. You shouldn't have any jargon on your site, but it's on this list because we all have jargon, no matter what industry we work in. Jargon seems to be inescapable, as much as we try to eradicate it—so test all these things. This is the Microsoft Writing Style Guide, which is available online if you want to check it out. Looking at this A-to-Z word list, I asked myself, how did these words get here? A lot of my smart coworkers in marketing and branding developed some of the words in the list we use, but I can tell you that some of them haven't been validated. We try our best for our brand voice to be warm, clear, and ready to lend a helping hand. I think the style guide is awesome, but we can go further. We can become more customer-centric by validating the words that are in our style guide. And if you don't have a style guide, you can create one with all validated content, because you can use user testing to make sure the words in your style guide are perfect.

Quickly, what is content research not? It's not a couple of things, and I want to clarify these. It's not "regular" UX research. You might have done prototype research with visual design and content design together. That research is great—I'm not putting it down—but if you can decouple the words, if you can separate the words from the design and focus on the words, that's what I'm talking about. Sometimes you'll need to include a screen grab, or maybe even a prototype, in content research. But the more you can use just the words and ask people, "Is this word clear to you? What do you think of this? Tell me more about this," the more value you'll get. It's also not the same as A/B experimentation. A/B experimentation is good up to a point, but I think it's time-consuming and resource-intensive. And I think sometimes UX teams—I'll get on a soapbox here—
get a little bit bent out of shape about statistical significance. It's great if you run an A/B experiment and reach statistical significance, but do we really have to be that precise? We're not creating drugs; we're not creating a new COVID vaccine. I think people tend to get overly worked up about A/B experimentation, and it slows down organizations. I think you can move faster and be more nimble with content testing instead. If you want to do content testing prior to A/B experiments, that's a great way to go. If version A is your control and version B is your test version, make sure version B is something that content research shows is resonating with your audience. If you're doing multivariate experimentation, make sure the variations you're testing have been validated by content research.

So, what to know before you start. You can use any number of online tools to accomplish online content testing: UserTesting, UserZoom, even SurveyMonkey. You don't need them, though. They're great and fairly affordable—you can also try the free trials if you're interested—but you can get super scrappy. When I worked at Rover, it was a startup and we were strapped for cash; we were bootstrapped. And I wanted to know what words were going to work—we needed to know what worked with our audience, because the competition was so stiff. (Rover has since bought their competition, which is great.) But I didn't have a staff; I was a content team of one. I basically ran around downtown Seattle with a clipboard and a bunch of Starbucks gift cards, five bucks each, and went up to people and said, "Do you have a dog? If you do, can I talk to you about some of our website content?" And people would say, "Oh, free latte. Sure, I'll talk to you." What I'm saying is, you don't need a fancy tool to do content research. You can do it on a bootstrap. I like to plan. This is a sample test plan.
I like to loop in my stakeholders and just get out of my brain what I'm trying to learn: what do I want to accomplish with my user tests that focus on content? I loop in my product manager, my visual designer, even my software team. I want them to know what I'm doing, so I share my test plan with them. Sometimes I collaborate with user research on these, if I think they need to be moderated, but most of the time, if I'm doing an unmoderated content test, I set it up myself in UserTesting and just let my team know what's up. One thing to be careful about when you're doing content research—as you should with all usability research—is to be sensitive. If there's content that's already live, your content designers and UX writers did their best creating it. Don't be critical of that content. Look at user research as a way to help the team grow and get more customer-centric. It's not about being critical; you shouldn't see it as a critical thing. It's a way to grow and learn, so approach it that way. On the same note, don't think of words as winners or losers. If you run a content test and one word is much preferred over another, don't call that the winner word and the less resonant one the loser word. I think that's a dangerous way to think about it, because it puts down content, and we don't need that. Look at it from a growth-mindset perspective and you'll be all the better for it. And keep your tests short. I usually aim for about 10 or 15 minutes maximum for the time it should take a participant to finish a test. Longer than that and they'll lose interest, and for the questions that come at the end of your test, you can tell from the answers that their energy is sapped. It's just best to keep them short and sweet. So, let's get to an example.
This is a marketing page for Microsoft 365, formerly known as Office—you might hear me use those terms interchangeably. It's been over a year since we rebranded it, but most people are still more familiar with Office than Microsoft 365. Anyway, this is a landing page that people come to when they're buying, and when they buy, they need to buy licenses for their users. Usually people are signing up their whole organization at the same time, so they need to get one license per user. When I started on the team, I thought, "license"—that's kind of a heavy word. It reminded me of the DMV. It reminded me of waiting in line on a Saturday morning to get my license renewed. That's just the connotation I had in my brain about it. And I thought, what if we tried to swap in the word "seat"? Why not "seat" instead of "license"? "License" is pretty industry-standard, and I figured a lot of people were familiar with it, but could we lighten it a little bit? It's all over the place on our site; it's all over the place when you're managing Office—a license here, your license there. Could we use a different word that's not only shorter but maybe a little bit clearer? So, I set up a test on usertesting.com to see whether "seat" was clear or "license" was clear. My goal was just to find out which one works better for our audience, and that's the important part. These are Microsoft shoppers; they are different from your customers. But you can target your test to the participants you need. I focused on 20 small business owners. "Small business owner" means 10 or fewer employees in Microsoft land, so keep in mind that this is our target market. We have other customers, of course, but for the sake of this particular UX work, I was focusing on the very small business, or VSB, audience. You don't need 20—you can test with as few as five people, and Nielsen Norman Group has a great article on this. A lot of people say five is too few.
It's not, it's not. I like 20 just because, when I ask people open-ended questions, it gives me a whole lot of answers that I find super enlightening, but five might be the sweet spot for you. It's also easier to analyze feedback from only five participants. So, testing "seat" versus "license": the first question I set up was multiple choice—a quantitative question. "Which word would you use to describe the thing that gives you permission to use software?" Now, if my eighth-grade English teacher read that sentence, she would say my syntax is off. It's a clunky sentence, and it's clunky deliberately. I don't want to use the words I'm trying to evaluate in the test question. I am bending over backwards to avoid saying them in that sentence, to avoid biasing my participants' responses. So, it's a clunky question, but it's useful. And whenever I do a multiple-choice question, I want to leave room for "none of the above" or "something else." I don't want to bias my participants into feeling forced into answering one way or another. I always want to give them the out of "none of the above—no, that doesn't work for me, none of them"—and then ask them what word they would use if none of these are working for them. The follow-up to the multiple-choice question is an open-ended, or qualitative, question. Now, a lot of people get confused by "qualitative." It has nothing to do with quality, so don't think of it like that. Qualitative questions ask the why—why people are thinking the way they are. And I like to ask, "Tell me a bit more about why you answered the way you did." You can also think of these open-ended questions as the Olivia Newton-John questions: tell me more, tell me more. I like to combine this quantitative and qualitative approach. I think it's super powerful to pair a "what" question with a "why" question. It's really a great way to go.
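The paired "what" plus "why" pattern described here can be sketched as a tiny test-plan structure. This is purely illustrative (the field names are hypothetical, not from any testing tool's API); the prompts and options mirror the talk's seat-versus-license example:

```python
# Hypothetical sketch of the paired quantitative + qualitative question pattern.
test_plan = [
    {
        "type": "multiple_choice",  # the quantitative "what"
        "prompt": ("Which word would you use to describe the thing "
                   "that gives you permission to use software?"),
        # Always include an escape hatch so participants aren't forced to choose.
        "options": ["seat", "license", "none of the above"],
    },
    {
        "type": "open_ended",  # the qualitative "why" follow-up
        "prompt": "Tell me a bit more about why you answered the way you did.",
    },
]

# Sanity check: every multiple-choice question should offer a way out.
for question in test_plan:
    if question["type"] == "multiple_choice":
        assert "none of the above" in question["options"]
```

The point of the structure is simply that the two question types travel together: a closed question to count preferences, immediately followed by an open one to surface the reasons behind them.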
So, this is what we learned from testing "seat" and "license": "license" was preferred by 17 of 20 people. A couple of coworkers told me, "I told you so." I said, good for you—I had no idea. I really didn't think "license" was going to be this preferable. It could be because it's industry-standard, or people weren't as bothered by "license" as I was. And that's great to know—this is super powerful information, a bit of an "aha" moment for me. Did it check my bias? Certainly. And it was the open-ended question—the "tell me more, tell me why"—where the nuggets of gold came from. We asked people, "Tell me why you responded the way you did," and the additional information they provided kind of pulled the curtain back. It showed me that people are comfortable with the word "license" up to a certain point, but they could not explain it in their own words very well. They stumbled around it, and they provided feedback that made it super clear that enough of them—not all, but enough—didn't know how many licenses you need. And this was super important to our customer experience. You don't need to get a license for each device. If you have a computer and a phone and a tablet, you need only one license—one license per user, basically. People were stumbling. People said, "Oh, you need one license for your whole organization." No, that would be a subscription. You need a license for each user, but only one per user. So, this was an epiphany: people don't know how many licenses are needed, and we had been assuming the UX made it clear. When people are buying a hundred licenses for their organization, that's expensive. We don't want them to buy 200 when they should only buy a hundred—that's ridiculous, and it would also make them think that Microsoft products are more expensive than they really are. We don't want people to over-purchase.
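A result like 17 of 20 doesn't need the full A/B-experimentation rigor the talk pushes back on, but a quick interval can show how directional a small-sample read is. Here's a minimal sketch (not from the talk; the function name is made up) using a Wilson score interval for the preference share:

```python
from math import sqrt

def preference_share(successes: int, n: int, z: float = 1.96):
    """Observed share preferring a term, plus an approximate 95% Wilson interval."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, max(0.0, center - margin), min(1.0, center + margin)

share, lo, hi = preference_share(17, 20)
print(f"'license' preferred by {share:.0%}; 95% interval roughly {lo:.0%} to {hi:.0%}")
```

With n=20 the interval is wide (roughly the mid-60s to mid-90s percent), which supports the talk's framing: small content tests give you a confident direction and rich "why" feedback, not precise point estimates.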
We want them to be happy. We don't want to undermine their confidence, and we don't want them to get super confused when they go to set up the product and get their company going with it. We don't want any confusion. We don't want them to have to return to the website to buy more. We don't want them to get their invoice and go, "Wait, wait." Worst of all—what I'm trying to say is that many customers didn't realize they were over-purchasing, which is awful, awful, awful. So, long story short, the UX had to be clarified to make sure people were buying one license per user: you need one license for each user. Guess what? The designer pushed back on this. She didn't want to add it, so we brought her along for the ride: we showed her the results from the user testing study, which showed that people did not know what to do—they didn't know how many licenses to buy. This copy is necessary. This content has to go here to prevent a customer-experience snafu, and it's going to reduce calls and emails and texts to the help desk. It's super, super important. We're looking at quantifying exactly what the business impact is, because this just recently launched, but we know it's going to help customers be more loyal and more likely to renew, because they're not over-purchasing and they're not having trouble with setup when they initially buy the product. We know this is a hugely important collection of six words that's making a world of difference. And we published the study—we have a SharePoint site where we publish all of our user research, and there's a special category for content research now because our team is doing so much of it. We can filter on content research. This is a report I published in collaboration with my user researcher, Veronica, who is amazing.
I think it's important to collaborate really closely with user research, and not step on their toes, when doing content research. We did test another batch of words in the same study. And this report has been read by over 250 people across Microsoft—not just from my organization, but people around the company, which is a huge company, a hundred thousand people or so. Everyone has access to the study, and I'm thrilled. So, now marketing and branding and other departments outside of UX are referencing this terminology study, and people keep bringing it up in meetings: "Let's look at that terminology study. Remember, we looked at those words—these are the right words to use. We know, because we tested them."

So, how do you go about setting something up in UserTesting? You can use UserZoom, don't get me wrong, but UserTesting is what we use at Microsoft, so I'm just going to walk you through a sample setup in UserTesting. Choose your number of participants. You can do five, you can do 20—I think it actually goes up to 999, if I'm not mistaken. For each test you run, you delineate how many participants you need and what device type you want them to use: smartphone, tablet, or computer. You can test all over the world—there are people participating in UserTesting all over the world, which is great—and you can even specify which state, if you need to target your research accordingly. For that reason, you can use UserTesting to validate the localization of your content, to make sure the language is clear and that, culturally, it's landing well with your audiences in different countries. This is a sample screener-question setup. You can see I want to make sure I'm getting feedback from people who work at small companies of 10 or fewer people. Those people get welcomed into the study.
And then people who work at bigger companies don't get to take the study. (People are compensated by UserTesting for their participation, by the way.) And there are a whole bunch of other filters—by industry, by job role, by seniority, all sorts of things—so you can get really, really granular really quickly. The more granular your test is, the longer it might take to run, but generally I get results within three days if I've got a basic type of audience. If you're looking for something super specific—a really specific industry, like security professionals with 10 or more years' experience—it's going to take longer to run, but I usually allot a week for my UserTesting studies, just to give myself a little bit of a cushion. Then create your test questions. This is really drag-and-drop; it's super fast. You can do a rating scale in addition to multiple choice, for a different quantitative question type. That's where you say, on a scale of one to nine, where one is "not at all well" and nine is "extremely well," how well do you understand the word such-and-such? There are different scale question types you can use, and you can also customize the endpoints—one to five, one to seven, one to nine—so you know which end of the scale people are falling on. And this is a sample output from UserTesting. I call these donut graphs; I don't know if that's accurate, but I think of them as donuts. Maybe I'm hungry, or maybe I really like donuts, but I call them donut graphs. This one is not from the seat-versus-license study; it's one a coworker ran because she wanted to know what to name a dashboard that was going to be super important to our business. We didn't want to have to go back and rename it; we wanted to make sure we landed the name of the dashboard super well.
And it showed that "simplified" was the word to use: both people who had a lot of experience and people who did not have a lot of experience preferred that word, which was really interesting feedback. If you ask a scale question, this is the sort of bar graph you'd get as an output. I love putting these graphics in PowerPoint presentations for my leadership team so they can understand: this is what we know, and this is how we know it. But I don't want to leave out the qualitative responses, because they add so much color, so much information. This is a sample output from a qualitative question. There's a lot of content here. You can export it to Excel and then go through and highlight where the nuggets of gold are. This is where I was uncovering, teasing out, the fact that people were comfortable with the word "license" but couldn't explain it in their own words, and they sure couldn't understand how many they needed for the organization. That was a big light-bulb moment for our organization.

So, if you want to build a content research program, go for it—I say do it now. You can get your whole team comfortable doing it. I started with a group training program, and that didn't land very well. I think maybe it's because our content team is peanut-buttered a little bit—we're spread thin—and people just didn't have the bandwidth initially to wrap their heads around adding another thing, adding content research, to their very full plates. So, I think an each-one-teach-one approach is maybe more helpful, at least for our team. Think about how your coworkers learn: are they visual learners, do they like to learn in groups or on their own? It's something to consider. Creating templates was really key for our team, so we could plug and play—so we could get really crisp in our thinking about what we want to learn from the research, and make sure our tests were focused and stayed in scope.
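The export-and-highlight pass over open-ended answers can also be roughed in with a few lines of code. This is a minimal sketch, not the speaker's actual tooling, and the sample responses are invented to echo the license confusion described above; it just tallies recurring non-filler words so you can see which terms participants circle around:

```python
import re
from collections import Counter

# Tiny filler-word list for illustration; a real pass would use a fuller stopword set.
STOPWORDS = {"the", "a", "an", "to", "of", "and", "i", "it", "is", "if", "for", "you"}

def keyword_counts(responses):
    """Tally non-stopword terms across free-text survey answers."""
    counts = Counter()
    for text in responses:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

# Invented sample answers, echoing the kind of stumbling the study surfaced.
sample = [
    "A license is permission to use the software, I think?",
    "Not sure if I need a license per device or per person.",
    "One license for the whole organization, maybe?",
]
print(keyword_counts(sample).most_common(5))
```

A frequency pass like this won't replace reading the responses, but it can point you toward which rows of the Excel export deserve the highlighter first.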
That way we could run more short tests quickly—like I said, short and sweet for the win. And this is, again, the report. Share your results widely. Make sure people are aware of the testing that's going on; share it with your feature team, and share it more broadly than your feature team. This is a newsletter we send out: at the end of the month, we put all the research, including content research, into a newsletter that goes out to the organization. This is the one email everyone is excited to get every month. People can't wait to open it, dig in, and read about it. It's fantastic. And make it a habit. If you can, add content research to your sprints—add it in for every sprint. Make a backlog of what you want to test, and then start chipping away at it. We've also added content research to our team's goals. We have annual reviews, and if people work this into their goals, it's one way to make sure they're participating and making it part of their day-to-day work. And then things take off: people will talk about it widely, and you'll soon be up to your ears in requests for content research, which is pretty cool. At least, that's what happened at our organization, and it's fantastic. We're driving a lot of business impact with it, and I wish there were more hours in the day to do more of these. All right, that's my presentation. Thank you for coming.
