What Every UX Researcher Should Know About Large Language Models (LLMs)
- Philip Burgess, UX Research Leader

- Aug 21
Updated: Oct 25
Large Language Models (LLMs) like ChatGPT, Claude, and Gemini are transforming the digital landscape—and UX research is no exception. As these tools become more embedded in our workflows and the products we help shape, every UX researcher needs to understand what they are, how they work, and what they mean for our craft.
This isn’t just about learning a new tool. It’s about understanding the new terrain of human-computer interaction.
What Are LLMs, Really?
At their core, LLMs are AI systems trained on vast amounts of text data. They learn patterns in language—grammar, tone, logic, flow—and use probability to predict what word (or phrase) should come next in a sentence.
But they don’t “think,” “know,” or “understand” like humans. They generate plausible-sounding responses based on patterns—not truth. That distinction matters when using them in research.
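To make "use probability to predict the next word" concrete, here is a toy sketch of the idea using simple word counts. Real LLMs use neural networks over subword tokens rather than counting words, and this tiny corpus is invented for illustration, but the principle of "predict the most probable continuation" is the same.

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real model trains on billions of documents.
corpus = "the user taps the button and the screen loads".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("and"))  # "the" — the only word ever seen after "and"
```

Notice that the model can only echo patterns it has seen: ask it about a word outside its training data and it has nothing grounded to say, which is exactly why plausible-sounding output is not the same as truth.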

LLMs in the UX Research Toolkit
Here’s how LLMs are being used by researchers today:
| Task | How LLMs Help |
| --- | --- |
| Survey Analysis | Summarizing thousands of open-text responses |
| Interview Synthesis | Drafting themes from transcripts or notes |
| Prompt Generation | Writing better screener or discussion guide prompts |
| Competitive Review | Extracting patterns from app store or review data |
| Storytelling | Drafting insights slides or executive summaries |
| Accessibility | Translating or simplifying content for inclusivity |
LLMs can supercharge your workflow—but only if you understand when (and how) to trust them.
5 Things Every UX Researcher Must Understand About LLMs
1. LLMs Don’t Know Facts
They don’t “remember” training data the way we remember a conversation. Instead, they mimic patterns. So when you ask, “What were users’ biggest pain points?”—they may fabricate something plausible but untrue.
Always validate AI-generated findings against your raw data.
2. Prompt Crafting Is a Skill
A generic prompt like “summarize this interview” won’t give you rich insights. A better prompt would be:
“Summarize this interview by identifying 3 user goals, 3 friction points, and one unexpected insight. Use direct quotes when possible.”
UX researchers must become prompt designers.
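One practical habit is to treat good prompts as reusable templates rather than one-off strings. The sketch below wraps the structured prompt above in a hypothetical helper function (`build_synthesis_prompt` is not part of any LLM library; you would pass its output to whichever model you use):

```python
# Hypothetical helper: turns a transcript into a structured
# synthesis prompt. The prompt wording mirrors the example above.
def build_synthesis_prompt(transcript: str, n_goals: int = 3,
                           n_frictions: int = 3) -> str:
    return (
        f"Summarize this interview by identifying {n_goals} user goals, "
        f"{n_frictions} friction points, and one unexpected insight. "
        "Use direct quotes when possible.\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = build_synthesis_prompt("P1: I gave up at the checkout screen...")
print(prompt)
```

Parameterizing the prompt this way keeps your asks consistent across interviews and makes it easy to iterate on wording in one place.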
3. Bias In, Bias Out
LLMs are trained on human-created data, which means they inherit systemic biases—gender, race, culture, and more. They may overlook nuance, especially in underrepresented voices.
Always apply a critical lens and involve diverse reviewers.
4. They Lack Contextual Awareness
LLMs don’t remember what came before unless you tell them. They can’t understand product goals, user segments, or business context unless you embed it in your prompt.
You provide the context—LLMs only reflect it.
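Providing that context can be as simple as prepending a short block of study facts to every prompt. A minimal sketch, assuming an invented product and study (every field name and value below is illustrative, not a standard schema):

```python
# Illustrative study context; the model has no memory of your
# project between calls, so you restate this in each prompt.
context = {
    "product": "mobile banking app",
    "segment": "first-time users, ages 18-25",
    "goal": "reduce drop-off during onboarding",
}

context_block = "\n".join(f"{key}: {value}" for key, value in context.items())
prompt = (
    "You are assisting a UX researcher. Study context:\n"
    f"{context_block}\n\n"
    "Given that context, list likely causes of onboarding drop-off "
    "mentioned in the notes below.\n\n"
    "Notes: [paste session notes here]"
)
print(prompt)
```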
5. They're Better for Acceleration, Not Accuracy
Think of LLMs as a jumpstart. They can give you a first draft, spark ideas, or handle repetitive synthesis. But they can’t replace deep human insight, emotional nuance, or strategic judgment.
LLMs save time; they don’t replace thinking.
Where UX Researchers Can Add Value
With LLMs taking on more of the grunt work, researchers are free to lean into what machines can’t replicate:
Deep empathy through live conversations
Contextualized insight that drives design
Ethical reflection on participant inclusion and impact
Strategic storytelling that influences at the leadership level
Facilitating cross-functional understanding and alignment
LLMs aren’t a threat—they’re a lever. The more we understand them, the more powerful we become.
Final Thought
The best UX researchers of tomorrow won’t be the ones who resist AI—they’ll be the ones who wield it responsibly.
Large Language Models are not oracles. But in the hands of thoughtful, human-centered researchers, they can become incredible tools for speed, scale, and exploration.
Philip Burgess | philipburgess.net | phil@philipburgess.net