As AI assistants powered by Large Language Models (such as ChatGPT or Microsoft’s sprawling Copilot offering) become increasingly ubiquitous, I have repeatedly found myself thinking about the growing importance of having a basic understanding of social psychology and common cognitive biases.
Our written words are manifestations of our mental structures and thought patterns. When you consider this, it isn’t surprising that our interactions with AI models trained on billions of those very words suffer from many of the same biases that have traditionally belonged exclusively to the domain of human-to-human communication.
One of the more uncomfortable sets of cognitive biases to recall when working with AI is the so-called coherence heuristic, which has been discussed by psychologist Daniel Kahneman. (Probably others, too.. but I’ve mainly read Mr. Kahneman’s work on the topic thus far, so I’m referencing him here.)
I would describe the coherence heuristic as a set of individual biases, each of which have an effect on the way we ingest and qualify new information. Sounds relevant? It is, especially right now.
Some of its key aspects include:
- The prioritization of cognitive ease
- The illusion of understanding
- Confirmation bias
Next, I’ll try to provide some brief thoughts on these and how they relate to working with LLMs using natural language.
The pursuit of cognitive ease

Oftentimes we might rely on models like GPT-4 to help us understand new concepts – technical or otherwise. To avoid mental effort, we might consciously or unconsciously accept an LLM-powered chatbot’s suggestions without double-checking the context or accuracy of the proposed content. This is especially likely when the subject matter is foreign to us and the bot’s answers are formatted clearly and sound coherent to us.
In other words, we have a tendency to qualify new information on an unfamiliar topic based on whether it looks factual and bears the typical hallmarks of quality information, irrespective of whether it’s really backed up by anything concrete. It’s a mental shortcut.
This is why I think it protects us to acquire a base level of competence in a topic through traditional means before studying it further with the help of LLMs. With more contextual information, AI hallucinations are far easier to spot, challenge and verify.
Even experts in a given field can be vulnerable to over-reliance on AI because of the associated boost to cognitive ease and are liable to get caught up in the magic: Recently, a lawyer got caught submitting imaginary cases in a real-life court filing.
An excerpt:
(…) <the lawyer> said heβd never used ChatGPT before and had no idea it would just invent cases.
In fact, <the lawyer> said he even asked ChatGPT if the cases were real. The chatbot insisted they were.
Forbes.com, “Lawyer Uses ChatGPT In Federal Court And It Goes Horribly Wrong” (May 27, 2023)
The illusion of understanding

When someone provides a simple explanation of a new thing in snack-sized format (with bullet points, statistics and/or cool visuals) we typically rate the information favorably, as I mentioned when discussing cognitive ease. Well-formatted information – even when fundamentally incorrect – can give us a satisfying feeling of being on top of things, which we like.
The thing is, we are prone to overestimating our own level of knowledge when we only have incomplete but very well formatted information on it. This can lead to overconfidence and mistakes when moving to apply our limited knowledge to real world challenges.
I reckon most of my colleagues in IT recognize this one especially well; the more experienced and knowledgeable you get, the less certain you are willing to be about.. well. most things. There’s a reason “it depends” is the eternal mantra of consultants.
Large language models like GPT-4 can indeed provide easily digestible “explain like I’m 5” answers on most topics but those answers might leave out key bits of context that would be relevant generally or specifically to our own scenario.
Confirmation bias

This one is a classic and probably well-known to most of you already. When we receive new information that fits with our existing conceptions and opinions, we subject that information to far less critical observation than we do to information that contradicts our existing understanding or beliefs. We do this to avoid cognitive dissonance, which can feel highly uncomfortable and straining.
In short, we enjoy feeling like we have the right idea about something. Having to reframe our mental model doesn’t initially feel great, so we might subconsciously seek to avoid the negative sensations by resorting to confirmation bias.
Unfortunately, generalized LLM-based chatbots often go out of their way to bend to your will – in a humorous example, they might even agree that 4+9=15 if challenged aggressively enough.
A good, constructive and mature disagreement and respectful debate are conducive to new ideas and insights. Yes, LLM chatbots do push back in many cases, but not with the same vigor, inventiveness and persistence as an another human would.
Unless we realize that these chatbots might not truly challenge our preconceptions even when they have correct information, we are prone to falling into this trap.
The availability heuristic (and some parting thoughts)

Finally, in “Thinking, Fast and Slow” from 2011 (I recommend the book warmly), Mr. Kahneman highlights the availability heuristic – a cognitive shortcut that makes us mostly rely on information, anecdotes and examples that are effortlessly recalled and easy to understand, without always correctly assessing their relevancy to the task at hand.
When working with LLMs, this effect can make us prompt them only with the first things that come to mind without considering if our prompts are actually formatted in a way that utilizes the AI model’s capabilities effectively. This, in turn, can lead to us getting undesirable outcomes and incorrectly judging the value and potency of the tool itself.
As stated before, all of these biases (and other important ones, like the automation bias that causes us to give elevated value to suggestions provided by machines) together with the rapid rise of consumer LLMs make for a perfect storm that should be discussed broadly and deeply earlier rather than later.
β‘ Action points
Take a moment and consider situations in which you could have been affected by cognitive biases when working with AI services like ChatGPT or Bing Chat.
Check out a prompt engineering guide like this one from Microsoft and make some notes on how to effectively work with AI models.
You’ll probably start to notice that there are clear differences compared to human interaction and that LLMs can be thought of as an advanced tool – one with a uniquely intuitive user experience.
As a fun experiment to cap this blog off, I decided to visit ChatGPT to request its definition of the availability heuristic. As expected, it made several points I felt were more or less in line with what Mr. Kahneman wrote about. Then, I asked ChatGPT to challenge its own answer.
It came back with some sage advice. π





One response to “Let’s get uncomfortable: AI and cognitive biases”
[…] get to the root of things, we have to talk about psychology and cognitive bias. I touched on the topic previously in a different context in May. Indeed, whenever you insert a human into a decision making process involving other humans, the […]
LikeLike