AI in Design and Research: Revolution or Regression?

Debate

Continuous Research
UXDX EMEA 2024

Should UX designers embrace AI to drive efficiency and stay relevant (AI Enthusiast) or focus on originality and differentiation opportunities outside of AI (Positive Sceptic)? This debate will explore AI's impact on the creative industry and what it means for UX designers in terms of product development and professional growth.

The Combatants:

  • **The Enthusiast:** An advocate for empowering ourselves with AI and embracing it across all processes.
  • **The Sceptic:** A critic of the over-reliance on AI, arguing that it may lead to superficial conclusions and similar outputs across the industry.

The Format:

  • Each participant will outline their core arguments for or against the use of AI in design and research.
  • The debate will be divided into three rounds, each focusing on a key point of contention:
    • UX Research: Can AI-driven research help build impactful products, or is it designed to give superficial conclusions and data?
    • Creative Design: How much should UX designers embrace AI across the creative process? Will it lead to everyone going down the same road?
    • Job Security: Can AI tools enhance job security by making designers more efficient, or will it make the designer role obsolete?
  • Moderator Q&A: The moderator will pose questions to each debater, allowing them to dive deeper into their positions and address each other's arguments.
  • Closing Statements: Each participant will summarise their key takeaways and offer a final perspective on the future role of AI in UX design and research.
John Cleere, Facilitator, unmake

Kevin Hawkins, Director, Head of Design & Research, Amenitiz

We are about to have a debate about AI which I'm sure won't get heated. It won't get ridiculous, although one of the participants has already said he will turn to physical violence if it doesn't go his way, so I'm really looking forward to moderating this and going home with a very professional black eye at the end of things.
Here's how this will work: you will see a poll after each of three questions. For each question, we will have two minutes from one of our debaters, two minutes from the other, and then an aggressive 30 seconds to rebut whatever the second person has said. Then you get to decide who won the question. This is a conference, but with winners, and you get to decide who wins. Three questions - you will vote on who has won each of those, they will each do a summary, and then at the end you will decide who has won the debate overall.
It gives me great pleasure to welcome to the stage two titans of UXDX. We have Mr. John Cleere to begin with and Mr. Kevin Hawkins.
[AJ]: John, how are we feeling? Kind of scary, isn't it?
[John]: Yeah, a little bit scary.
[AJ]: You limbered up? It's kind of like Terminator 1 really, the movie, where Kevin is The Terminator with all this machine stuff, and then you're like Sarah Connor in between it all.
[John]: That is who I've always wanted to be.
[AJ]: Stay out of it.
[John]: Incredible. And then I'm like the guy who comes back from the future. I cut Kevin off at the pass at this inflection point of AI.
[AJ]: And if I'm not mistaken, you're going to play Johnny B. Goode on a guitar at the end?
[John]: Wrong film, wrong Back to the Future film.

Opening Statements

[John's Introduction]: First of all, I'm not alarmist about this. I'm more like a Cassandra, looking around the corners and seeing what could be coming in the context of design. We call AI artificial intelligence, but I'm wondering: is it actually intelligent, or would it be more accurate to call it artificial information processing? That could actually make sense.
If it's not actually intelligent, the first thing we should be doing as designers is recognising that there's already a bias built in by calling it intelligent. I found this only a few days ago from a guy called Stefano Quintarelli, who said that if we're to be more honest about what it is, we should call it "systematic approaches to learning algorithms and machine inferences" - that's SALAMI. So for the rest of this debate, I'm calling it SALAMI.
Is SALAMI going to take our creative jobs? Is SALAMI going to be creative? I don't really think so. I think it's quite frankly a lot of baloney. What I'm going to suggest to you beautiful lovely people - because you are people and humans, and I need you guys to vote for me - is that this is an ultra-processed product and it could be actually quite carcinogenic when it comes to UX research, to design, and to our jobs as well.
SALAMI is really pushing for efficiency over effectiveness, and efficiency at the expense of effectiveness is a bad thing. It's exploiting over exploring, pushing technology over psychology and the human condition. It's pushing machine speed over the speed of human thinking, and speed isn't everything. It's pushing content over context, and we have to bring context back into the C-suite because strategy is really important. A really good quote from Jules Goddard says that strategy is the rare and precious skill of staying one step ahead of the need to be efficient.
[Kevin's Response]: I think this is great that we did this right after the agency talk because I think AI is just like another agency. Whenever an in-house team loses to an agency, there's a big huff and puff online because "oh no, we could have done this in-house." But actually, I think the value of any creative person goes up when there's lots of sloppy competition, and so having AI produce tons of C+ material at high speeds makes us all look better.
If AI is so scary and AI is so bad, how are both of those things true at the same time? People are going to replace us because they're going to use AI? I think we've all heard the adage by now - I think 80% of people have said this - that if you're going to be replaced, it will be by people who do our jobs with AI, not by AI itself. But then there's this belief that at some point it'll learn everything we do.
We all famously know how Figma launched their AI product and everyone immediately freaked out. We said they "Adobe'd" Figma - they're going to take over all of our jobs, charge everyone $7,000 a seat. Then they tried it with Dev Mode. But the reality is, if you're going to defend yourself, I'd rather have competition. I don't want to be in a sea of two people who do amazing work, because you're going to lose the battle of convincing our non-design stakeholders that we are the best option. I'd rather have 15,000 competitors and still be considered the best. I'd rather have AI produce tons of UI and tons of ideas that fall flat and fail, and have my in-house team and myself look amazing - because we are.

Question 1: Can AI-driven research help build impactful products or is it designed to give superficial conclusions and data?

[Kevin]: As you can tell, I'm team AI here. I don't think it gives just superficial conclusions; I think it can be very, very helpful with the aid of humans. There needs to be the same quality bar, the same guidance we just heard about in the last talk. No agency is going to come in and do a better job without onboarding, without context, without guidance, without a clear goal and definition of success. AI is the same.
You can judge AI when you give it no context, no conclusions, no goals, no parameters, and then say, "Look, it doesn't know how to do anything." Or you can give it the proper onboarding, the proper guidance - the same you're supposed to give all your new hires but often don't. Maybe then it actually does a pretty good job; maybe it does help after all. So let's not judge a fish on how well it climbs a tree. Let's maybe give it a lake and give it a race.
[John]: The first thing I'll talk about is technology and psychology. I think there may be an over-reliance on technology now, and that can overshadow the importance of understanding the psychology in UX research. As an example, we can run A/B tests, and they're saying now that we don't need designers to design the interfaces - the AI can design them, pick the best one, and then move on to the next. It starts efficiently coming to conclusions through UX research, and I'm afraid that could be a lot of content without any real context about why someone might be clicking on something or moving something.
If you think back to the early 80s when Steve Jobs came out with the first Mac, he said that it can amplify our abilities. That's true - it should be about amplifying our abilities. But he also said, when he and Jony Ive brought out the glossy berry-coloured iMac that everyone could have in their house, that the handle on the top of it is so you can throw it out the window, because the computer is not for doing everything.
SALAMI relies on strict logic. It reduces the opportunity to test and research some of the counterintuitive ideas. Rory Sutherland from Ogilvy said the opposite of a good idea might actually be another good idea. When we look at some great products that come out, it's easy to post-rationalize a good idea, but in our roles in UX we need to start looking at counterintuitive ideas before they become good ideas. We need to be able to bring that to the C-suite and give them really good ideas of what that could be.
[Kevin's Rebuttal]: I'm going to use your example here for my rebuttal. This handle on a Mac seems random, right? It seems like a really bad idea - maybe like a hallucination from ChatGPT. Tons of good ideas, things that ended up being super cool, were originally just really weird ideas that no one understood. Jony Ive is famous for being considered the crazy guy at Apple who kept having new ideas that made people say, "No one's ever done that, that's bizarre - is that a mistake? You don't really mean that?"
It kind of feels like the first version of ChatGPT - like "oh that's a weird idea, there's no way it intended for that to be the solution." So I don't necessarily think that all ideas that you don't immediately agree with are bad ideas. I think you have to sift through the noise and sift through bad ideas to sometimes get really fabulous ideas. When everyone goes right and you want to go left, maybe you have to try to go left.

Question 2: How much should UX designers embrace AI across the creative process - will it lead to everyone going down the same road?

[John]: We have an impatience with human intelligence when we design the user interface. We say it's great because it's like a shortcut and gets us from A to B. We can get to a final interface very fast, and that's great, but are these actually just like really quick processed microwaveable recipes to get us to some sort of conclusion? Could we actually be nuking some good ideas here?
If you think about the value of the creative process, when you were in college you would have a thesis or you would have done a dissertation. If you look back at that today, you'll read it and go "Good Lord, what was I thinking?" But the thing is that it was the process of going from A to B, that space in between, that was really important in your development. It also meant that you were collaborating with people, you were reading books that you wouldn't regularly read, you were talking to people you wouldn't regularly talk to.
If we look at bees, they've been around 300 million years, and there are two types of bees in a hive. The worker bees follow a waggle dance to go out and get the pollen, and they expend less energy than the energy they bring back, and that makes them good worker bees. They're very efficient, and if they're not efficient anymore, they're kicked out. But then there's the 20% of bees called scout bees, who explore and don't follow the waggle dance. They go out and find new areas, new value - new pollen, new flowers in different places.
[Kevin]: How many of you remember when you used to have very small teams and you used to have no tools and you used to just do 10 versions of everything? You send a client 20 drafts of a logo, 100 mockups, tons and tons of wireframes, and you trash most of it, and then somehow we still all make websites look like linear.com.
We've done all of this work to get better and better as designers, and there's a website called linear.art which is just a thousand websites that look identical to linear.com, because even with all of our user research, competitive analysis, senior designers, and agencies, we somehow still arrived at the same ideas. So if we did it, of course AI sometimes will do it - but if we did it and it shipped a thousand times, we could probably catch that much faster when AI can do the competitive research in about 30 seconds, something we couldn't do over the last 12 years.
I think there are ways to use AI that make you a better designer, and again, you shouldn't have an over-reliance on it, but I'd much rather have my ideas shut down in the first hour of ideation than spend two weeks doing drafts and have things coded, just to find out that we built the same thing our competitors built, that we ran the same A/B test they ran last year and it bombed. I'd rather know that now, at the speed of AI, than learn it at the speed of me - a human without the ability to scan the entire internet in seconds.
[John's Rebuttal]: The thing is, as we get more into the exploit area, I think AI will take over a lot more of it, and that's okay - but the explore area is going to get bigger, because we'll need resilience. Explore is about resilience, and we'll need more resilience as the world moves on. There are more peaks and troughs, closer together now than there used to be - crashes, different things going on, people losing jobs. So explore is going to get bigger, and we're going to need a lot more exploration for processes, for products, for people, for humanity. Exploit is getting smaller because the machines are doing it all.

Question 3: Jobs - Is AI going to steal your job? Can AI tools enhance job security by making designers more efficient or will it make us all obsolete?

[Kevin]: In the last three jobs I've had, I've had to both conduct layoffs and then been laid off - very successful part of my career. I think efficiency is being forced upon us. I think massive jumps in how we are being measured for value and for success are happening whether we get better as designers or not.
If we are going to have that battle - I'm going to use a bit of a war metaphor here - would I rather be the samurai or would I rather be the machine gun? I think AI is the machine gun now. I love a well-crafted, centuries-old traditional technique passed down from father to son, but when it comes to winning a battle, that culture and that craft and that tradition only matter if they can scale.
I can be the best designer, but the best designer with no one to design for, no paycheck, and no income - are they still a designer, or are they a craft practitioner marked "open to work" on LinkedIn? So in this battle for keeping our jobs, for still being in the game, you need to pick up the best techniques, the best skills. When something is rushing at you, when someone is trying to break into your house, do you not try to close the door and lock it, or do you just let them in? So I say pick up your tools, learn a new one, and defend yourself.
[John]: We've had 15 years of zero interest rates. Some of you can't even remember a time when there were interest rates. That's a huge problem, because it meant that VCs had lots of cheap money flying around, throwing it at companies all over the place. All of that died just last year or the year before. And guess what appeared straight away? AI appeared all of a sudden. So now the VCs are all like, "Hey, this AI stuff - yeah, you've got to push for AI." I think it's a bit of a joke, to be quite honest.
What happens then is that you get a lot of cost cutting, and we're using AI to do the cost cutting. You have all these awful companies like McKinsey coming out and packaging cost cutting as "this is efficiency, we're going to do these things for your company and save you loads of money," and you get rid of a load of people.
There were two reports out just last month - Notion did a report with the Harvard Business Review, and Goldman Sachs did one as well - about the future of jobs and knowledge workers. They both came to the same conclusion: there are problems with the workforce in terms of siloed information, bad practices in the processes we're using at the moment, and a lack of collaboration. They really emphasize the lack of collaboration, and they push for a lot more critical thinking, a lot more creativity, and a lot more collaboration.
It's like when we were on the walking tour of Dublin a couple of days ago - I remember we were talking about George Bernard Shaw, and he has a beautiful quote about communication: the single biggest problem with communication is the illusion that it has taken place. I think we really have to go back to looking at how we communicate with each other when we have things like AI agents coming into the realm of design. Apparently, 2025 is going to be the year of AI agents.
[Kevin's Rebuttal]: Communication is my passion. I've traveled to many countries - I've been to 72 and lived in eight of them. My parents are these crazy travel people and they forced this on me very early. I've always realized that the ability to communicate is super important, to the point where my biggest tattoo is a quote in Latin, "scientia est potestas" - "knowledge is power." And I think AI enables us to have more knowledge.

Final Summaries

[Kevin]: Let's stick with this analogy. We all like our house, and people are trying to attack our house, and I'm saying we close the door and lock it. Now, this does not mean it will work forever. When we talk about AI taking your jobs, we're not talking about the short game - we're talking about the long game.
When I say that you need to pick up the tools and learn AI and learn new skills, I think we eventually need to figure out what is the difference between good design and amazing design, and I think that's everyone's personal journey. The problem is we often don't look around and see other people doing the same journey