Psychologists Amos Tversky and Daniel Kahneman demonstrated that people often rely on fast, intuitive rules of thumb, known as heuristics, and that these shortcuts lead to predictable biases in judgment. Their classic work launched an entire field and still shapes how we design choices, labels, prices, and flows. If you’ve heard of anchoring, framing, or loss aversion, that’s them.
A simple, working definition you can tell your team: a cognitive bias is a systematic tilt in how we notice, interpret, or remember information, often triggered by context and the way options are presented. As the Interaction Design Foundation puts it, the framing of information can steer judgment away from purely “rational” analysis—and toward what feels right in the moment.
Display a high “compare at” price and the sale looks irresistible; open a form with a long, demanding first field and the whole thing feels tedious. Tversky and Kahneman’s famous experiment even used a rigged wheel of fortune: people’s estimates drifted toward whatever random number they saw first. Design move: set anchors intentionally. In pricing, use fair reference prices; in UI, lead with a concise, easy first step so the rest of the flow feels lighter.
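If you build pricing UI, one way to keep the anchor honest is to show a compare-at price only when it reflects something you actually charged recently. A minimal sketch in TypeScript, assuming a hypothetical fairCompareAtPrice helper and a 90-day window:

```typescript
// Sketch: only surface a "compare at" anchor when it is a fair reference,
// e.g. a price actually charged in the last 90 days. Names and the window
// are illustrative assumptions, not any specific platform's rule.

interface PriceHistoryEntry { price: number; date: Date; }

function fairCompareAtPrice(history: PriceHistoryEntry[], current: number): number | null {
  const ninetyDaysAgo = Date.now() - 90 * 24 * 60 * 60 * 1000;
  const recentPrices = history
    .filter(h => h.date.getTime() >= ninetyDaysAgo)
    .map(h => h.price);
  const highestRecent = Math.max(0, ...recentPrices);
  // Anchor only when the reference is real and genuinely higher than today's price.
  return highestRecent > current ? highestRecent : null;
}
```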
“Pay a ₹50 fee” vs. “Save ₹50 when you pay now.” Same math, different decisions. Framing shapes how we perceive risk and which option we prefer; the 1981 paper showed that people’s choices can reverse depending on whether the same outcome is worded as a gain or a loss. Design move: frame copy around user value and clarity, not fear. A/B test neutral vs. positive frames, especially around payments and permissions.
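If you want to run that A/B test, the two frames can live side by side as copy variants. A minimal sketch; the variant names, copy strings, and hashing helper are illustrative assumptions, not any particular experimentation tool’s API:

```typescript
// Two framings of the same ₹50 fact, assigned deterministically per user.
type FrameVariant = "neutral" | "positive";

const checkoutCopy: Record<FrameVariant, string> = {
  neutral: "A ₹50 fee applies to this order.",               // plain statement of the fee
  positive: "Save ₹50 when you pay now.",                    // same economics, framed as a gain
};

// Bucket a user so they always see the same frame across sessions.
function assignFrame(userId: string): FrameVariant {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "neutral" : "positive";
}

// Usage: log the variant alongside the conversion event so the frames can be compared.
const variant = assignFrame("user-123");
console.log(variant, checkoutCopy[variant]);
```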
Prospect Theory finds we’re more sensitive to losses than equivalent gains. That’s why removing a feature can spark more anger than adding the same amount of value sparks joy. Design move: avoid surprise losses. If you must remove or change something, explain the why, offer alternatives, and show a clear upside.
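The asymmetry even has a standard formula: Tversky and Kahneman’s value function is steeper for losses than for gains. A small sketch using their commonly cited 1992 median estimates (alpha ≈ 0.88, lambda ≈ 2.25) shows a ₹50 loss weighing roughly twice as much as a ₹50 gain:

```typescript
// Prospect-theory value function with the 1992 median parameter estimates;
// exact parameters vary by study, so treat the numbers as illustrative.
const ALPHA = 0.88;  // diminishing sensitivity for both gains and losses
const LAMBDA = 2.25; // loss-aversion coefficient: losses weigh ~2.25x gains

function subjectiveValue(outcome: number): number {
  return outcome >= 0
    ? Math.pow(outcome, ALPHA)              // gains
    : -LAMBDA * Math.pow(-outcome, ALPHA);  // losses hurt more
}

console.log(subjectiveValue(50));   // ≈ 31  (felt benefit of gaining ₹50)
console.log(subjectiveValue(-50));  // ≈ -70 (felt cost of losing ₹50)
```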
Defaults are powerful. Meta-analyses and the well-known organ-donation studies show that opt-out defaults dramatically raise participation. In products, a sensible default (“Last 7 days”) removes a decision and speeds the first action. Design move: set helpful, reversible defaults and explain them (“Recommended for most users”). Use them to support user goals—not to trap people.
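To make “helpful, reversible defaults” concrete, here is a sketch of a date-range filter whose default is the recommended “Last 7 days” preset and stays one click away from being changed; the type and field names are hypothetical:

```typescript
// Sketch of a filter config with a helpful, clearly labeled, reversible default.
interface DateRangeOption {
  id: "last_7_days" | "last_30_days" | "custom";
  label: string;
  recommended?: boolean; // drives a "Recommended for most users" hint in the UI
}

const dateRangeOptions: DateRangeOption[] = [
  { id: "last_7_days", label: "Last 7 days", recommended: true },
  { id: "last_30_days", label: "Last 30 days" },
  { id: "custom", label: "Custom range" },
];

// The default removes a decision but remains easy to override.
const defaultDateRange = dateRangeOptions.find(o => o.recommended)?.id ?? "last_7_days";
console.log(defaultDateRange); // "last_7_days"
```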
Teams (and users) seek evidence that proves their hunches. In research, this means asking leading questions or overemphasizing a quote that supports the PM’s theory. Design move: write neutral tasks, include disconfirming probes, and pre-register what success looks like before you see the data. Even a basic definition from trusted references stresses this tendency to favor confirming information.
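One lightweight guard is to write the research plan down as a dated artifact before the first session. A sketch of what that might look like; the field names and thresholds are illustrative, not a formal pre-registration standard:

```typescript
// Sketch of a pre-registered usability-study plan, written before any sessions run.
interface StudyPlan {
  hypothesis: string;           // stated up front, so it can be disconfirmed
  successCriteria: string[];    // what "works" means, decided before seeing data
  neutralTasks: string[];       // phrased without hinting at the expected answer
  disconfirmingProbes: string[];
  registeredOn: string;         // date-stamped before the first session
}

const checkoutStudy: StudyPlan = {
  hypothesis: "Users hesitate at the fee line item, not at the payment form itself.",
  successCriteria: ["At least 4 of 6 participants complete checkout without prompting"],
  neutralTasks: ["Buy this item the way you normally would."],
  disconfirmingProbes: ["Was there any point where you almost stopped? What was happening?"],
  registeredOn: "2024-05-01",
};
```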
A quick reminder for your class: biases aren’t just in users; they live in us, too, as researchers, designers, and stakeholders alike. Good process protects us from ourselves.
This respects the spirit of the research: predictable bias → testable design hypothesis → observable change.
Bring us your “people keep hesitating here” moments. We’ll turn them into kind, confident decisions—without crossing the line.
Q1. Are biases always bad?
No. Biases are energy-saving shortcuts. Problems arise when context triggers the wrong shortcut. Our job is to design a context that guides better choices.
Q2. Is “defaulting to annual billing” unethical?
It depends on transparency and reversibility. Evidence says defaults steer behavior; use them only when the default truly fits most users and is easy to change.
Q3. How do I avoid confirmation bias in my research?
Pre-define success metrics, write neutral tasks, seek disconfirming evidence, and include a second reviewer for notes. Even the APA’s definition highlights our tendency to hunt for confirming evidence; guard against it.