From Lip-Service AI to AI-Empowered Experiences: A Customer-Centric UX Framework

“You do not need to know everything in order to speak and innovate in AI. You just need to know enough.”

When Muriel walked on stage at UXDX USA 2025, she did not start with model architectures or buzzwords. She started with anxiety. The kind many designers and product people quietly carry when AI enters the conversation. The belief that unless you understand every acronym and every algorithm, you have no right to lead.

Then she dropped the line that set the tone for the rest of her talk. You do not need to know everything to innovate in AI; you just need to know enough. Enough to ask better questions, enough to understand the trade-offs, enough to collaborate with engineers and data teams without feeling like an outsider.

That balance of honesty and practicality defined the session. Muriel was not interested in glorifying AI. She wanted to show how to move from lip service and noise to experiences that genuinely help people.

The Myth of “We Need AI”

Muriel began by asking a simple question. Who here has heard some version of the phrase, “We need AI, can you do AI for us?” Hands went up across the room. Then she added the punchline. Hearing that, she said, is like someone saying, “We need the internet, can you do internet really well?”

The room laughed, but the discomfort underneath was real. For the last few years, teams in every industry have been pushed to add AI to roadmaps without a clear sense of why. AI becomes a mandate rather than a method. A label to stick on a slide rather than a lens to solve problems.

Muriel sees the consequence of this thinking everywhere. Treating AI as a goal in itself leads to half-baked pilots, confusing interfaces and products that work worse than they did before. It leads to what she calls money dumpsters: places where budget goes in and value never comes back.

Most dangerously, it leads to broken trust. Trust, she argued, is the hardest thing to win and the easiest thing to lose. You cannot fully quantify it with a neat metric, yet you feel its absence immediately. Once people feel an experience is careless, misleading or indifferent to their needs, they do not just question the feature; they question the company behind it.

Muriel pointed to public examples that reveal how fragile trust can be. The AI-generated poster for A24’s film Civil War, filled with imagery that never appears in the film, including the now-famous swans, became a symbol of that gap between promise and reality. The audience went looking for swans that did not exist, and the online conversation shifted from excitement to scepticism.

McDonald’s AI-powered drive-through, with its misheard orders and surreal substitutions, sparked waves of viral videos. On one level, it was humorous, but for the people stuck in cars dealing with repeated failures, it was a very real breakdown in service. A system meant to reduce friction had become a source of it.

For Muriel, these are not simply PR mishaps; they are reminders that AI deployed without a clear problem and without careful design erodes the very thing companies claim to value most, customer trust.

Where AI Quietly Works

Despite the criticism, Muriel is not cynical about AI. She is careful. Some of the most impactful AI, she reminded the audience, never appears in a press release. It simply makes life easier.

She highlighted how Deutsche Telekom used AI to reduce internal search time for employees. Legal, HR and finance teams spent minutes at a time hunting through disparate systems for answers. By applying AI to that internal problem, search time was reduced from around two minutes to about eighteen seconds. No glossy interface, no generative spectacle, just a steady saving in time and money that compounds daily.

Target took a similar path when it implemented AI-driven tools to help in-store staff find items more quickly. The impact showed up in shorter waits, smoother interactions and better onboarding for new employees. Shoppers just experienced a system that felt more responsive. There was no AI badge on the interface, because there did not need to be.

These examples mattered because they grounded the conversation. AI was not the hero. It was a supporting actor, in service of a clear and human goal. That, Muriel argued, is where AI belongs most of the time.

Learning Enough to Lead

One of the most relatable parts of Muriel’s talk came when she described an early role at Sisense, a big data analytics company, where she was the only designer. The CTO, a brilliant but demanding personality, gave her a challenge. She had one month to design a way for anyone, even her mum, to create a data dashboard in under a minute. At the time, even experts needed twenty minutes to an hour.

She knew she was not going to become a data scientist in four weeks. Instead, she chose to learn just enough about how the system worked, how people thought about data and where they got stuck. That approach allowed her to ask precise questions, push for the right constraints and design something that unlocked new behaviour.

The lesson stayed with her. Designers and product leaders do not need to be AI specialists to shape AI strategy. They need to understand enough of the language and constraints to collaborate effectively. Enough to sit in a room with Python engineers and data architects and hold a meaningful conversation rather than waiting to be told what is possible.

Muriel encouraged the audience to be proactive about this. Set up a short call with a backend or data engineer. Ask them to explain how the company’s data is structured, where it is strong, and where it is messy. Use plain language. Ask them to speak as if explaining it to a five-year-old. The goal is not expertise. The goal is shared language.

When teams share language, they can share ownership.

Choosing the Right “What”

If there was a single thread running through Muriel’s talk, it was the importance of choosing the right “what” before thinking about the “how”.

Too often, she sees the reverse. Leaders state, “We need AI,” and teams scramble to attach it to anything that feels visible or glamorous. This leads to chatbots nobody wants, recommendation engines that ignore context and features that only add noise.

Muriel argued that the real work is finding the problems that are both significant and repetitive. The places where people spend too much time, feel too much frustration or repeat the same actions again and again. These are the places where AI can quietly transform experiences.

She urged designers to talk to support teams, who act as early warning systems for broken journeys. She suggested watching how users fill out forms, noting where inputs are repeated across sessions and users. If most people are entering the same values over and over, that is a signal. It might be time to pre-fill, predict or simplify.
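That form-repetition signal can be checked with very little machinery. A minimal sketch, assuming you have a log of form submissions as field-to-value mappings (the field names and threshold here are purely illustrative):

```python
from collections import Counter

# Hypothetical form-submission log: each entry is one session's inputs.
submissions = [
    {"country": "Ireland", "currency": "EUR", "quantity": "3"},
    {"country": "Ireland", "currency": "EUR", "quantity": "1"},
    {"country": "Ireland", "currency": "EUR", "quantity": "5"},
    {"country": "France",  "currency": "EUR", "quantity": "2"},
]

def prefill_candidates(entries, threshold=0.75):
    """Flag fields where a single value dominates across sessions."""
    fields = {}
    for entry in entries:
        for field, value in entry.items():
            fields.setdefault(field, []).append(value)
    candidates = {}
    for field, values in fields.items():
        value, count = Counter(values).most_common(1)[0]
        if count / len(values) >= threshold:  # one value dominates
            candidates[field] = value
    return candidates

print(prefill_candidates(submissions))
# {'country': 'Ireland', 'currency': 'EUR'}
```

Fields like `quantity`, where values vary, stay untouched; fields where most users type the same thing become candidates for pre-filling or a smarter default.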

The test for a good AI opportunity, in her view, is simple. Does it solve a real, common problem that users actually care about, and does it do so in a way that removes steps and mental load rather than adding to them?

If the answer is yes, then AI can become a natural part of the experience instead of an awkward attachment.

Active AI and Passive AI

To help frame the landscape, Muriel introduced a distinction between active and passive AI.

Active AI is what most people think of first. You go somewhere, like a chat interface, you type a question, and the system responds. The user initiates and directs the interaction. This can be powerful, but it also demands effort, intent and sometimes a learning curve from the user.

Passive AI is different. It works quietly in the background, predicting what might help next and offering it without fanfare. It pre-fills forms based on previous behaviour, surfaces relevant content at the right moment and removes steps from workflows without asking you to change the way you think.

Muriel is especially excited about passive AI because it fits so naturally with how designers already think. Design systems are all about patterns and tokens, reusable structures that encode decisions so that others do not have to make them from scratch every time. Passive AI can be seen as another kind of pattern, one that encodes not just visual or interaction decisions, but behavioural ones.

When done well, passive AI feels less like a feature and more like a product finally living up to its potential.

AI as a Design Practice

Muriel finished by bringing AI back into familiar territory. Good AI, she said, looks a lot like good design.

It starts with research and listening rather than assumptions. It explores multiple options and is comfortable discarding most of them. It launches in small, contained ways before scaling. It always keeps a manual path open so that users never feel trapped. It builds in feedback loops, even something as simple as “Did this help, yes or no”, and then does the hard work of acting on what it hears.
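The yes/no feedback loop she describes can start as something this small. A sketch under assumed names (`FeedbackLog`, the feature label, everything here is illustrative, not a real API), capturing only a feature name and a binary answer:

```python
from collections import defaultdict

class FeedbackLog:
    """Minimal 'Did this help?' counter, aggregated per feature."""

    def __init__(self):
        self._votes = defaultdict(lambda: {"yes": 0, "no": 0})

    def record(self, feature: str, helped: bool) -> None:
        self._votes[feature]["yes" if helped else "no"] += 1

    def helpful_rate(self, feature: str) -> float:
        votes = self._votes[feature]
        total = votes["yes"] + votes["no"]
        return votes["yes"] / total if total else 0.0

log = FeedbackLog()
log.record("smart-prefill", True)
log.record("smart-prefill", True)
log.record("smart-prefill", False)
print(log.helpful_rate("smart-prefill"))  # two-thirds of votes said yes
```

The counting is trivial by design; as Muriel notes, the hard work is acting on what the numbers say.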

She stressed that AI features should never be shipped and forgotten. They need ongoing monitoring, adjustment and, crucially, documentation. Documentation is not glamorous, but without it, teams repeat mistakes when people leave or priorities change.

Underneath all of this is a kind of humility. The willingness to admit that systems can be wrong, that models need guidance and that users are the ultimate judges of value.

For Muriel, the most exciting thing about AI is not its novelty, but its ability to double down on what design has always promised. Helping people do what they need to do, with less friction and more care.

In a world full of lip service AI, her talk felt like a reset. A reminder that the real opportunity is not to do AI for its own sake, but to use it as one more tool in a deeply human craft.

Want to watch the full talk?

You can find it here on UXDX: https://uxdx.com/session/from-lip-service-ai-to-architecting-ai-empowered-experiences-a-customer-centric-ux-framework1/

Or explore all the insights in the UXDX USA 2025 Post Show Report: https://uxdx.com/post-show-report

Rory Madden

Founder, UXDX

I hate "It depends"! Organisations are complex but I believe that if you resort to it depends it means that you haven't explained it properly or you don't understand it. Having run UXDX for over 6 years I am using the knowledge from hundreds of case studies to create the UXDX model - an opinionated, principle-driven model that will help organisations change their ways of working without "It depends".
