Episode 161
3 Skills UX Professionals Need Most In the Age of AI
26 min listen
Episode Summary
Let’s get one thing straight: AI isn’t going to take your UX job. But the way you approach your work might accidentally result in someone else taking your job.
UX professionals who’ll stay in demand as AI tools get faster, smarter, and more integrated aren’t the ones doing the most tasks. They’re the ones driving clarity, decisions, and outcomes. They’re the ones who’ve made the shift from doer to driver.
If you’re still focused on checking boxes and executing tasks faster, you might be missing the whole point. Because being good at Figma or writing perfect usability scripts isn’t the differentiator anymore.
What sets high-value UX professionals apart today is how they think, not just what they do.
Here are the 3 skills that actually matter most in the age of AI:
1. Speed (but not in the way you think)
It’s not about how fast you move, it’s about moving fast and smart. The kind of speed that comes from making better upfront decisions so you avoid backtracking later. It’s about designing your research plan in a way that anticipates friction, not just ticking off steps and hoping AI-generated questions will be “good enough.” Real speed comes from judgment.
2. Quality Thinking
AI can generate ideas. But it can’t evaluate tradeoffs, spot nuance, or decide what actually matters. Thinking critically about what you’re researching, how you’re framing problems, and what outcomes actually drive value? That’s your job. If you blindly delegate your analysis to ChatGPT, don’t be surprised when your insights don’t land, or worse, send your team in the wrong direction.
3. Impact Over Outputs
The future isn’t about who can push pixels faster. It’s about who can guide teams through ambiguity. Your ability to influence, frame conversations, and shape decisions: that’s what makes you irreplaceable. AI might be able to generate a wireframe, but it can’t navigate the politics of getting that wireframe shipped or that feature prioritized.
In a world where tools are getting more powerful by the week, your value isn’t your ability to outpace AI. It’s your ability to do what AI can’t: think like a strategist, connect dots across chaos, and lead with clarity.
Create your dream career, and life
- Learn how to advance your UX career in our UX Career Roadmap
- Watch our free masterclass about how to get hired faster in your UX job search
- Stories of how UX and Product people got hired after working with us
Discussion Questions About The Episode
- How might you start shifting from a doer to a driver in your current role, especially when expectations still feel execution-heavy?
- Where in your UX process are you relying on speed at the expense of strategy—and what would it look like to slow down to think faster?
- In what ways could you rethink your value at work beyond deliverables? What outcomes are you already influencing that you haven’t been naming?
Episode Notes & Links
Episode Transcript
[00:00:00] AI isn’t gonna take your UX job, but if you keep operating like a TaskRabbit instead of a strategic thinker, you may accidentally replace yourself. To be valuable and in demand in the age of UX and AI, you really have to shift your mindset from being a doer to a driver. You don’t get replaced by AI because it’s better than you, for example.
You get replaced because you stop thinking like a driver and you start acting like a doer. Hey, I’m Sarah Doody, a user researcher and product designer with 20 years of experience. In 2017, I noticed something a little ironic: UX and product people, despite being great at designing experiences for other people, often struggle to design their own careers.
That’s why I created Career Strategy Lab [00:01:00] and this podcast: to help you navigate your UX job search, grow in your current role, and avoid skill and salary plateaus, all in a chill and BS-free way. So whether you’re stuck in your job search or wondering what’s next in your UX career, you are in the right place.
I often see this fixation on knowing software, knowing processes, knowing methodology, et cetera. But in my experience, the most valuable people are not the ones that know how to use every single feature in software. They’re the ones who can ask the right questions, they’re the ones who can make sense of total messes, and they’re the ones who really drive clarity in rooms that are frankly full of chaos.
So if you’re worried about the future of UX and AI, or you’re wondering what you can really, truly bring to the table as a candidate in a UX job search right now that [00:02:00] AI cannot, this episode is for you. I’m going to break down the three skills that I believe are more important than ever in user experience if you wanna keep delivering value in the age of AI. Now, Scott Galloway has really said it best, and I wanna just quote him. He said, AI isn’t coming for your job, but someone who knows how to use AI is. And I heard that probably two years ago, and it’s really stuck with me, because it’s not about fearing AI, it’s about knowing how to evolve as a thinker. Not just kind of going through the motions, but delivering results, decisions, clarity, and outcomes that AI cannot come close to. Now, in case we haven’t met yet, my name is Sarah Doody. I am a user researcher and product designer with 22 years of experience, and now I also [00:03:00] use my experience in UX and product design to help you design your own career and get hired or promoted with a five-figure salary increase.
So before we get into these three specific skills, I wanna kind of zoom out and hopefully shift your mindset so you can maybe identify which of these you are. In order to be valuable and in demand in the age of UX and AI, you really have to shift your mindset from being a doer to a driver, so you don’t get replaced by AI because it’s better than you, for example.
You get replaced because you stop thinking like a driver and you start acting like a doer. So let me explain here. If I hire a bookkeeper, for example, I obviously want them to know how to use software like QuickBooks. [00:04:00] But what I really want is someone who can also say, hey, based on your numbers, you’re overspending here.
Here’s a better structure. If you did X, Y, Z, you could save $20,000 a year. That is the person I want. That person is not just doing tasks. They’re thinking ahead, they’re anticipating, they’re bringing judgment, they’re bringing discernment, and they’re connecting the dots so I can make better and faster decisions.
That’s what it means to be a driver, not a doer. And that’s what teams need from UX and product people right now. They need people who are drivers, especially as AI gets better at generating outputs, because your value is not just how well or how fast you can execute, right? It’s how well you can do things like prioritizing, framing, influencing, all these [00:05:00] things that we know are so important to our success as we deal with stakeholders, clients, product owners, et cetera. So I wanna walk you through three critical skills through the lens of a UX research project, something maybe you’ve done before if you’re a researcher. But I think it’ll help us have a more concrete conversation as to the value of these skills and the risks of outsourcing them to AI.
So these three skills that we are going to talk about are the skills of speed, quality thinking, and impact. So let’s start with speed. And I don’t just wanna focus on speed, because to me there’s a difference between doing something fast and doing something faster and better: the ability to move quickly because you’ve made smart decisions upfront, right?
If you move really fast but then you hit [00:06:00] a wall and you realize, oh, now I’ve made a ton of mistakes and I need to go back and fix stuff, that’s not really a great position to be in. But if you can move fast and strategic, that means you’re probably less likely to be making the wrong decisions or actions, and as a result, need to do less rework, essentially.
So in this example of a user research project, let’s think about what a doer would do versus someone who is more of a driver. So the doer may look at this research project, and let’s imagine they’re at the step of needing to outline the questions for the user research interview and some tasks that we’re gonna have people do, um, kind of like usability tasks, right? So the doer may think to themselves, great, I am just gonna go over to ChatGPT. I’m gonna type in a prompt, something along the lines of, [00:07:00] help me write a user research interview script and usability testing script. I am going to be conducting one-hour interviews with 20 people, and I need to understand something, whatever the topic is, right? And then they would hit enter, and a couple of seconds later, there’s the script, and that doer may think to themselves, awesome, I’m done. Right? Unfortunately, maybe they’re not even going to read the script, which is painful to think that people do that, but it a hundred percent happens.
The doer is going to run into problems down the road, because what could end up happening is, let’s say that interview script has to be reviewed or approved by someone. They may get to that approval point, and that person is like, what the hell are these questions? This has nothing to do with our goals.
Like, why are we not asking this, that, the other, right? That is one possible scenario. A worse scenario is that they conduct the interviews and the [00:08:00] usability testing and then are presenting those research findings, and then it comes up that, like, what the heck were we even researching? Like, what you’re telling us is not in line with what we were hoping to learn, right? So by outsourcing the creation of that discussion guide and interview script to AI, that is a real risk, because that doer focused on speed, whereas that driver would be pairing speed with strategy. So what the driver would do in this case is they may use AI to help them draft the initial interview and discussion guide, right?
They would also provide more details and context about the company, the goals of the research, any potential data that, obviously from a privacy standpoint, they [00:09:00] can feed into the AI. They would be providing more context. They wouldn’t just be saying, make a discussion guide for a one-hour interview, enter, right? That is the difference between a doer and a driver. The driver may also not even use AI for the first draft. They may come up with the first draft themself and then have AI help them make that draft better, maybe the phrasing of questions, et cetera. The driver also cares about speed, but they’re not gonna sacrifice quality for speed, because they are pairing speed with strategy. So let’s think about the second skill, which is the skill of quality thinking. Now, when it comes to research, AI could be great for summarizing, right? But the problem is it doesn’t have great knowledge or instinct for [00:10:00] nuance, for connecting the dots, for even understanding the nonverbal things that happened in that research. Right? Especially if the research was conducted in person and you were sitting there and you could hear the person struggling with their mouse and they’re, like, rage clicking. The AI may not know that, but since you were there sitting beside them, that is an example of the nuance that could be missed if we just feed the transcripts of these interviews into AI in order for it to synthesize the research findings, right? So let’s say we’ve done the research. We have 10, 20 interviews, whatever it is. The doer would be focused on, let’s get this done fast, right? What they’re gonna run into is a quality problem, because they were not also pairing speed with [00:11:00] quality thinking.
And so they may end up with an initial list of problems, like onboarding is confusing, people want more control, notifications are excessive, or stuff like that, right? But oftentimes what we’re gonna find is that what’s missing is the context behind that, the why, right? So, like, onboarding is confusing.
Okay, but why? Is it because of the interface, where things are located on the screen? Is it because of the order of the onboarding? Because something happened in the screen prior that confused people, or something was missing in the screen prior that confused people? Is it because of the literal words on the interface, on the buttons, on, you know, the filters or whatever it is? And I struggle to imagine a scenario where AI is going to have enough context [00:12:00] to piece that all together, right?
That is the risk for that person who is more of a doer. Whereas that person who is a driver, they too may give those transcripts to AI, and they may see that list and see that, you know, onboarding is confusing. But because they’re also going to use their brain, they’re able to then infer and create additional connections, connecting these dots, remembering that, yes, for eight out of those 15 people, on step two of the onboarding there definitely was confusion. And after watching that many people go through it, it’s pretty clear that the order of the onboarding combined with the interface is tripping people up. Right. But would AI be able to identify that?[00:13:00]
I don’t think so. Also, another thing, and this is really important: oftentimes when I have done research, I am leveraging what I learned in that specific set of interviews, and I am also drawing on the hundreds of hours, thousands of hours of research that I have done with people for different products, even in different industries. Because onboarding for a financial product versus a fitness product versus a travel product, obviously it is different, and we know there are probably a lot of similar patterns, behaviors, et cetera, user flows that could apply to onboarding regardless of the literal product. Right? And that is another benefit to someone who really flexes this skill of being a critical, quality thinker [00:14:00] and not just using AI to generate reports or findings, for example, but combining what they learned in those interviews with what they know as a professional and what they’ve seen in previous research projects. That is the difference between a doer and a driver when it comes to this skill of quality thinking. A doer accepts the output at face value, unfortunately. But a driver, they’re gonna read between the lines.
They’re gonna triangulate across research sessions, across something that they did two years ago, who knows. They’re gonna see deeper patterns and be able to filter them through context that AI probably doesn’t have. Quality thinking is what keeps insights from being ignored. Alright, the last skill that I wanna talk about is impact, which I guess you could say isn’t really a skill, but I’m just gonna roll with [00:15:00] it.
So with our research example, you can run a great research study, you can do awesome interviews, you can move fast, ask questions, gather insights. But if those insights don’t lead to action, then, like, what is the point, right? What’s the point of the research? Your research impact is about what happens after that research leaves your hands.
And in today’s AI-heavy environment, where tools can supposedly generate decks on the fly and in seconds, what really matters is how well your work drives decisions. So let’s go with an example. Let’s say you finish that research project, you have analyzed all the interviews, et cetera, and then you use some AI tool to create the final research deck or presentation, right?
And you [00:16:00] present this, and everyone nods and listens and seems like they’re excited or whatever. But then weeks, months later, nothing happens. Why is that? It’s because information does not always equal influence. Right? And someone who is more of a doer is going to be more likely to just be the person that provides information but doesn’t drive influence. Whereas a driver is more likely to drive influence because they are connecting the dots between the user, the product, the business, the market, whatever it is. And they’re able to drive influence because they didn’t just deliver a nice-looking research presentation. They took their knowledge of the business, the product, the team, [00:17:00] the time and money available to this product team to help suggest and prioritize actionable changes that were actually within reach of this team, right? Because if a doer just uses AI to generate a research report, chances are a lot of stuff in that report is totally unattainable for that team’s capacity, whether it’s time, money, whatever.
Whereas that driver would look at the team’s capacity and capabilities and try and offer suggestions that they could actually execute, right? And maybe even prioritize the suggestions. Like, when I do research reports, especially UX audits that I do from time to time, at the end of the audit I break down the suggestions, and I tell the person or the company or the team, I say, look, here [00:18:00] are the things, however many there are.
Here are the things that you should do right now. These are gonna give you the most bang for your buck in terms of impact to your product, right? Then I say to them, after you do those things, here are the other things you could do in this order, and here is why you should do them in that order. And I can’t tell you how often people tell me how much they appreciated that simple act of helping them prioritize the actions they could take, especially noting how useful it was for someone like me to also consider their capacity when I was making these suggestions. And I have to think that AI is not capable of that, because that doer is probably not baking that level of detail and [00:19:00] context into that prompt to create the research report, right?
In the case of this little fictional project we’re talking about, impact really means thinking beyond that research task or research report, and thinking into the product, the business, the stakeholders, the decision-making environment, the time, the money, the energy they have available, right? The driver is thinking about not just reporting, but translating information and insights that hopefully provoke action, because that’s what separates someone who is just practicing UX versus someone who is actually a strategic UX partner. Now, before we wrap up, I wanna touch on another topic that I really don’t think people are talking much about, and that is the topic of de-skilling as it relates to AI. And I wanna get out of user experience for a second [00:20:00] and talk about aviation, because
I first learned about this concept of de-skilling when it comes to aviation. I have this talk I do, I’ve been doing it for, gosh, over 10 years. I will link it in the show notes. It is all about the idea of anticipatory design and automation, and I’ve updated it recently to also talk about AI. And in the talk, one of the things that I cover is this example where there was a flight, an Air France flight going from Brazil to Paris.
It left Brazil at night. It got a couple of hours away, and the weather started to turn. What happened was, on the exterior of these planes there are these sensors, and they got covered in ice. And that was a problem, because it caused all these alarms to go off in the cockpit, [00:21:00] and it also caused the autopilot to stop working.
Therefore, the pilots needed to start flying the plane. But in the panic of the moment, the pilots actually did the opposite of what they were supposed to do in order to course correct, and as a result, the plane went down at a really, really fast speed and ultimately crashed. And after the French equivalent of the NTSB did their investigation, what they found was that the crash happened because of de-skilling, in that when the systems failed and those pilots had to step in, their instincts were slower. And not only this accident but many other accidents have been attributed to this idea of [00:22:00] de-skilling. And I just have to imagine that this idea of de-skilling is gonna become a big deal for us, because if we keep outsourcing our skill, our thinking, et cetera, what is going to happen, right?
That’s why I started this episode saying that I really think it’s important that we exercise these muscles, and really think of these skills of speed, of quality thinking, and of impact as muscles. Because if we don’t use them and we outsource them to AI, our boss, our colleagues, our stakeholders, our clients are going to notice.
Because the output of AI is not always good. In fact, it’s sometimes obviously bad, or frankly a hundred percent wrong. And the same thing is really happening in UX. I think the people that are most at risk are the ones [00:23:00] who are going to be suffering from de-skilling, because they are not exercising these skills and really working these muscles of focusing on speed combined with strategy, quality thinking, and not just impact, but impact that leads to action and outcomes.
And, like, just a side note: this is one of the reasons why I am so anti people using AI to literally write their resume or write their case studies. Is it a good tool to help you refine them? Yes. But anyone that is telling you that they have some AI tool that can just create your resume from scratch? I don’t believe it.
It’s a major red flag. The other thing I would caution you with is that the very act of thinking about and then writing about [00:24:00] your work history in your resume or in your portfolio is also helping you do better when you get to the point of needing to talk about that on a screener call, in an interview, et cetera.
Because if you just have AI make your resume or make your portfolio, that means you haven’t really thought about it, right? And for some of you, you maybe didn’t even read what it generated in the first place. What does that mean? It means that when you get to those interviews, you’re not gonna have a clue what to talk about.
You’re gonna fumble over your words. Whatever you say is not gonna make sense ’cause you didn’t think about it, ’cause you outsourced that to AI. And over time, what happens if you do this over and over? De-skilling, right? It’s atrophy. So I want you to think about: are you on the path of being a doer, or are you on the path of being a driver when it comes to [00:25:00] using AI in your job?
If you’re on the doer path, it is not too late. But if you don’t get off this doer path, you’re gonna end up in this world of de-skilling, which means you are going to become obsolete. If you focus on being a driver who is using AI as a companion and not just something to do your job, you’re gonna be just fine.
Because in the age of AI, your job isn’t necessarily to generate more. It’s to generate better, it’s to generate meaning, right? That’s what’s gonna make you valuable. That’s what’s gonna keep you sharp, and that’s what’s going to make you irreplaceable. Alright. I hope you’re thinking a little differently about AI in your career, maybe even your job search, than you were before you listened to this episode.
And hey, I do have one favor. I am trying to get to a [00:26:00] hundred star ratings and 50 reviews or comments on this podcast. So can you do me a favor? It’s only gonna take less than one minute. Hit pause right now, go and give me a star rating, and then, if you’re on Apple Podcasts, write a review. It doesn’t have to be long, it could just be one sentence.
But these two actions of leaving a star rating and writing a review help signal to the podcast algorithms that you found this helpful. And then it tells the algorithms to suggest this podcast to other UX and product people, and that also helps me keep this podcast and other things I do free. So if you enjoy this free podcast and you wanna keep it free, do me a favor and leave a star rating and/or a review.
All right, that’s all. I’ll see you in another episode.
