Date: 2017-now
Context: In 2017, I presented a keynote at IXDA Florianópolis that proposed something most people weren't talking about yet: what if digital products could adapt to you the way a good teacher, coach, or friend does? Not just remember your preferences, but actually notice your struggles, recognize your patterns, and adjust in real time to help you grow.
I called this vision Adaptive UX: systems that learn and change over time to fit a user's specific and situational desires, skills, and goals. Experiences that feel genuinely caring, outcome-oriented, and dynamically tailored to people's changing contexts.
At the time, it was largely theoretical. Today, with large language models, it's becoming remarkably practical.

Presenting the concept of Adaptive UX in 2017 to an audience of 1500 people.

From Personas to Behavior in Context
The thinking behind Adaptive UX built up across a series of projects: adaptive literacy education with the Lemann Foundation, food hyper-personalization experiments, a dynamic Hulu mobile interface, and my master's thesis exploring how technology shapes behavior. Across all of them, I kept hitting the same realization: personalization is a design problem, not just an algorithmic one.
The focus shifts from "who you are as a segment" to "how you are right now" across different emotional states, contexts, and levels of expertise.
An Experience of Care
To be accepted as an ally, a system needs to care about you and the things you care about. The question of authentic care emerged from watching how traditional personalization froze people into static profiles and optimized for clicks or purchases, not human growth.
Adaptive UX reframes personalization as an ongoing empathy and diagnostic cycle: 
• Track behavior (including errors).
• Connect with current and past context.
• Infer hypotheses and create a model about the user's intentions, emotional state, and level of proficiency. 
• Continuously adjust content, pacing, and interaction to support their goals.
When I talk about empathy here, I mean cognitive empathy—understanding what someone is trying to do, how they see the world, and what they might be feeling, then using that understanding to help them progress. The design challenge becomes making these "empathy models" legible, evolvable, and trustworthy for both users and the service.
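A minimal sketch of that cycle in Python, with invented signal names, thresholds, and update rules; a real system would swap in its own inference:

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Evolving hypotheses about intent, emotional state, and proficiency."""
    intent: str = "unknown"
    frustration: float = 0.0     # hypothetical scale: 0.0 calm .. 1.0 frustrated
    proficiency: float = 0.5     # hypothetical scale: 0.0 novice .. 1.0 expert
    history: list = field(default_factory=list)

def update_model(model: UserModel, event: dict) -> UserModel:
    """Track behavior (including errors) and revise the hypotheses."""
    model.history.append(event)  # connect with current and past context
    if event.get("error"):
        model.frustration = min(1.0, model.frustration + 0.2)
        model.proficiency = max(0.0, model.proficiency - 0.05)
    else:
        model.frustration = max(0.0, model.frustration - 0.1)
        model.proficiency = min(1.0, model.proficiency + 0.02)
    return model

def adapt(model: UserModel) -> dict:
    """Continuously adjust content, pacing, and interaction to the model."""
    if model.frustration > 0.6:
        return {"pacing": "slower", "support": "offer a worked example"}
    if model.proficiency > 0.8:
        return {"pacing": "faster", "support": "hide beginner hints"}
    return {"pacing": "steady", "support": "keep current scaffolding"}
```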
Create a Model of the User's Current Mental Model
To be relevant, you need to empathize with how the person you're trying to help thinks and feels. This means creating a model that captures their mental model for understanding and engaging with a particular activity.
This concept never got more concrete than in my work with the Lemann Foundation on teaching literacy in Brazilian public schools. We faced the challenge of turning static, one-size-fits-all education materials into a platform that dynamically adjusts in real time to individuals and supports learning goals.

The core idea: pose a dynamic set of spelling exercises and treat errors as windows into a child's understanding of the writing system at that moment. Use those signals to create targeted exercises that conflict with their incorrect or incomplete understanding, prompting them to incorporate new knowledge and evolve their mental model.

Not too different from how scientific theories evolve.

This solution was grounded in observing teachers with 25 years of experience and manually encoding their tacit knowledge into software. We taught the system to interpret unconventional spellings as evidence of specific learning hypotheses—like a child believing "big animals should have big names"—and mapped the different ways those teachers personalized their tutoring based on that evidence. For example, asking the child to spell the short names of big animals and the long names of small animals.

Asking the child to spell the short names of big animals and the long names of small animals challenges an incomplete mental model.
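A toy sketch of that diagnostic pattern in Python; the real system hand-encoded far richer teacher rules, and every name, word, and threshold here is invented for illustration:

```python
def suggests_size_equals_length(word: str, spelling: str, referent_is_big: bool) -> bool:
    """Hypothesis check: a child who believes 'big animals should have big
    names' tends to pad the names of big referents and shorten the names
    of small ones."""
    if referent_is_big:
        return len(spelling) > len(word)   # extra letters added to a short name
    return len(spelling) < len(word)       # letters dropped from a long name

def conflict_exercises(evidence_count: int, threshold: int = 3) -> list[str]:
    """Once enough errors support the hypothesis, serve counter-examples:
    short names of big animals and long names of small animals."""
    if evidence_count < threshold:
        return []                          # not enough evidence yet; keep observing
    return ["ox", "whale", "grasshopper", "butterfly"]
```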

Design Tensions
For Adaptive UX to achieve its fundamental proposition of efficiency and care, a new set of questions and tensions needs to be addressed.
What's the balance between pleasing (as in sycophancy) and challenging (helping you learn and grow)? How do we balance dynamism with predictability? Confidence and proactivity with openness and mindful restraint? How can the interaction build on a user's strengths while also providing extra support for their less developed skills?
A designer needs to make decisions on several tensions:
Autonomy vs. Guidance. How much should the system intervene, and how much should it step back? A writing tool that auto-corrects every error might help efficiency, but erode a user's sense of authorship. When does helpful assistance become infantilizing?
Exploration vs. Exploitation. Should the system keep serving you what works or occasionally introduce unfamiliar options to expand your horizons? A music app that only plays what you've liked before creates comfort but also a filter bubble. This tension has a classic algorithmic form; see the sketch after this list.
Transparency vs. Seamlessness. Do users want to see how the system is adapting to them, or should it "just work"? Showing your reasoning builds trust but can also create cognitive overhead. When does explanation become interruption?
Consistency vs. Contextualization. How do we maintain familiar patterns users rely on while still adapting to context? If your interface completely reorganizes based on your current task, you might never develop muscle memory or spatial reasoning about where things are.
Individual Optimization vs. Collective Learning. Should the system optimize purely for you, or incorporate wisdom from similar users? Sometimes what feels right in the moment isn't what helps long-term growth—a fitness app that only suggests easy workouts won't help you progress.
Implicit vs. Explicit Adaptation. Should systems infer needs from behavior alone, or ask users directly what they want? Inference feels magical but can be wrong; asking feels clunky but respects agency. When do you observe silently versus probe actively?
Short-term Satisfaction vs. Long-term Growth. The literacy project exemplifies this—sometimes the most caring thing is to create productive struggle rather than remove all friction. How do we design for the person someone is becoming, not just who they are right now?
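The exploration vs. exploitation sketch referenced above, in minimal epsilon-greedy form; it assumes the system already tracks a feedback score per item, and the epsilon value is arbitrary:

```python
import random

def choose_item(scores: dict[str, float], epsilon: float = 0.1) -> str:
    """Epsilon-greedy recommendation: mostly exploit what has worked,
    but occasionally explore something unfamiliar to widen the bubble."""
    if random.random() < epsilon:
        return random.choice(list(scores))   # explore: a random item
    return max(scores, key=scores.get)       # exploit: the best-scoring item

# Example: choose_item({"liked_genre": 0.9, "adjacent_genre": 0.4, "new_genre": 0.1})
```

Raising epsilon trades short-term satisfaction for broader horizons, the same trade-off the music-app example describes.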
Even as systems adapt automatically, users should keep control over their experience, with options to adjust preferences or disable "smart" features. Hyper-personalization also raises real risks around privacy, manipulation, and bias.

Some principles of Adaptive UX.

We create mental models of what intelligence is, which often lead to expectations that machines cannot fulfill.
Design can reduce that gap by clearly communicating the right expectations and enhancing the experience's value. AI does not need to be super-intelligent to be useful; it can achieve great results by collaborating well with people.

Making It Concrete: Text Editors as Adaptive Partners
To ground this concept, I walked through the evolution of text editors in the talk: a transition from command-line tools to WYSIWYG interfaces, to grammar assistants, to speculative "co-authors" that understand writing style, confidence, and intent.
The interface is reframed as a collaborative partner that observes, hypothesizes, and adapts, instead of a fixed tool waiting for commands. Designers become the architects of these collaborations, shaping how future systems sense, interpret, and respond to human experience at scale.

A collaborative editor starts by diagnosing what you want to write, the style, your proficiency level, and how engaged you are with the task: for example, analyzing how you start the text, the phrases you write, how decisive your writing is, and even the inspiration you gathered before starting. It then validates its read with transparency: Did I get this right?
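A rough Python sketch of that diagnostic step, with invented signal names and thresholds, ending in the transparency check:

```python
from dataclasses import dataclass

@dataclass
class WritingSignals:
    """Observable signals while someone writes; names are illustrative."""
    avg_pause_seconds: float         # hesitation between phrases
    deletions_per_100_chars: float   # how often text gets rewritten
    opening_length_words: int        # how the text is started

def diagnose(signals: WritingSignals) -> dict:
    """Form a hypothesis about decisiveness, then surface it for the
    user to confirm rather than acting on it silently."""
    decisive = (signals.avg_pause_seconds < 5
                and signals.deletions_per_100_chars < 10)
    return {
        "decisiveness": "high" if decisive else "low",
        "suggested_support": ("stay out of the way" if decisive
                              else "offer outlines and examples"),
        "confirm_with_user": "You seem to be writing "
                             + ("confidently" if decisive else "exploratively")
                             + ". Did I get this right?",
    }
```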

Adaptive UX in Practice Today
(2025 update) What was largely theoretical in 2017 has become practical with large language models. Modern LLMs can maintain rich, turn-by-turn user profiles, infer intentions from natural language, and continuously update their hypotheses. Rather than hard-coding expert rules the way we did with the literacy project, designers can now teach LLMs what cognitive empathy looks like by providing examples of helpful, context-aware responses and letting the model generalize across contexts.
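A minimal sketch of that loop in Python; call_llm is a stub for whatever provider API a product would actually use, and the profile schema is illustrative:

```python
import json

def call_llm(prompt: str) -> str:
    """Stub: replace with a real LLM API call from your provider."""
    raise NotImplementedError

PROFILE_PROMPT = """You maintain a user profile as JSON with the keys:
intent, emotional_state, proficiency, open_questions.
Current profile: {profile}
Latest user turn: {turn}
Return the updated profile as JSON only."""

def update_profile(profile: dict, turn: str) -> dict:
    """Let the model revise its hypotheses about the user after each turn,
    instead of hand-coding expert rules."""
    raw = call_llm(PROFILE_PROMPT.format(profile=json.dumps(profile), turn=turn))
    return json.loads(raw)
```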
Some good examples of dynamic interfaces are emerging. Most are not yet fulfilling the Adaptive UX vision, especially in the caring aspect, but they are advancing the field.
For example, Duolingo has built a proprietary AI system called "BirdBrain" that analyzes performance, mistake patterns, response times, and retention likelihoods to adjust lesson difficulty, spacing, and exercise types on the fly. This mirrors the adaptive learning principles from my literacy project, but at scale and with far less manual rule-crafting.
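BirdBrain itself is proprietary, but the general principle of holding learners near a target success rate can be sketched in a few lines of Python; the numbers here are invented:

```python
def adjust_difficulty(current: float, recent_success_rate: float,
                      target: float = 0.8, step: float = 0.1) -> float:
    """Toy success-rate targeting: nudge difficulty up when the learner
    cruises and down when they struggle, keeping practice challenging
    but achievable."""
    if recent_success_rate > target:
        return min(1.0, current + step)   # too easy: raise difficulty
    return max(0.0, current - step)       # too hard: ease off
```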
The landscape has also shifted from scripted chatbots to embedded AI copilots that observe workflows, surface knowledge at the right moment, and negotiate the division of labor between human judgment and machine speed.
The Design Challenge Ahead
The original question remains: how do we create empathetic machines? The answer now includes LLMs, but the central design challenge persists—managing expectations, making mental models legible, and ensuring people can steer their own profiles.
Adaptive UX could become the default mode for designing intelligent interfaces that respect human agency while scaling empathy across billions of interactions.

It is the design of a relationship between AI and people, grounded in empathy, trust, and collaboration.

The full keynote presentation, 52 min.
