The Gravity of Bias: How AI Can Distort Our Perceptions
- Jeff Hulett

- Oct 10
- Updated: Oct 15

First off, you need to get over it. Please repeat after me:
"Data is biased. Data is always biased."
Don't you feel better now?! In our messy, real-world day-to-day lives, data suffers from the "3 Nevers" of the knowledge problem. Knowledge, and therefore data, is:
Never Complete,
Never Static, and
Never Centralized.
(See "When Maps Melt" in the references below to learn more about the 3 Nevers.)
Now that you know this, it's time to discuss 1) how bias shows up in the GenAI world and 2) what to do about bias and the 3 Nevers.
Our brains and psychology are hardwired to do two fundamental things: seek immediate cause and effect (the "why") and hunt for human heroes or villains (the "who"). These two programs run constantly in our subconscious, like hidden survival software.
This default programming is great for spotting a predator or seeking the protection of the tribe. But in the age of data and Generative AI, it may be a dangerous flaw. We naturally elevate people—not gravity or circumstance—as the cause of success or failure. When in hero mode, our cause-seeking feeds the mantra: If they can do it, I can too!
When coupled with AI's predictive power and biased data, this innate tendency leads to severe distortion. Because we spend far more of everyday life analyzing data than running from danger, knowing when and how to override this default programming is now our most crucial intellectual defense.
Neurobiologically, our brains are constantly constructing narratives to make sense of the world. The prefrontal cortex, a key region for executive functions, works to establish causal links, often driven by a dopamine-fueled reward system when a coherent explanation is formed (Schultz, 1998). This strong drive to understand why makes us particularly vulnerable to misinterpreting information, especially when presented through a biased lens.
Consider the three-frame lead graphic, which illustrates this phenomenon:
Who is the cause?
The lead image has three related frames. In the first frame, we see a pickup truck on what appears to be a flat surface, with individuals seemingly pushing it. Our immediate, ingrained "why-seeking" mechanism might lead us to attribute the truck's movement to the efforts of these individuals. It may seem silly to think the pushing effort of the dunce in the truck bed matters, but they are pushing, and this pushing is "correlated" with the movement of the truck. At this point, it seems the person pushing behind the pickup is the true cause of the movement. But are they?!

What is the cause?
The second frame introduces the critical element of perspective. Now our perspective changes, and we realize the pickup truck is on a hill. So what "causes" the pickup truck to move? Is it 1) the people pushing, the dunce in the truck bed and the person behind the pickup, or 2) gravity? The answer: gravity. This highlights how a dominant, albeit incorrect, interpretation can override the actual physical laws at play.
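To make the correlation-versus-causation trap concrete, here is a minimal Python sketch (all numbers are made up for illustration) in which a hidden confounder, the hill, drives both the pushing behavior and the truck's movement. The raw correlation between pushing and movement is strong, yet it vanishes once we condition on the hill:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Hidden confounder: is the truck on a hill? (1 = hill, 0 = flat)
on_hill = rng.integers(0, 2, size=n)

# People tend to hop behind the truck and "push" exactly when it is on
# a hill, so pushing is correlated with the hill without causing anything.
pushing = (on_hill + rng.normal(0, 0.3, size=n)) > 0.5

# The truck's movement is driven by gravity on the hill, not by pushing.
movement = on_hill + rng.normal(0, 0.1, size=n)

# Naive view: pushing and movement look strongly correlated...
print("corr(pushing, movement):", np.corrcoef(pushing, movement)[0, 1])

# ...but within each level of the true cause, the correlation vanishes.
for h in (0, 1):
    m = on_hill == h
    print(f"on_hill={h}:", np.corrcoef(pushing[m], movement[m])[0, 1])
```

This is the frame-one illusion in miniature: the correlation is observed from one perspective, while the causation lives somewhere else entirely.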

Distorting the cause
Finally, the third frame reveals the profound distortion possible through manipulated perspective. Notice how the pickup truck data can be distorted by the perspective of those who own the data: the camera is turned 45 degrees so the inclined truck appears to be on flat ground. Yikes! Rotating the camera angle, a metaphor for data manipulation, skews the visual evidence. This subtle manipulation can reinforce a false perception, further cementing an inaccurate causal link in our minds. We are also naturally biased to want to believe that people, not the environment, cause success. This is known as the hero fallacy, a classic example of the fundamental attribution error, the cognitive bias in which we overemphasize personal traits and underemphasize situational factors when explaining others' behavior (Ross, 1977).
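The camera trick is just a change of coordinates. As a rough numpy sketch (the 45-degree figure comes from the frame description; the specific points are illustrative), rotating the scene by minus 45 degrees makes a 45-degree incline register as perfectly flat:

```python
import numpy as np

# Six points along a 45-degree incline: the road the truck actually sits on.
x = np.linspace(0.0, 10.0, 6)
road = np.vstack([x, x])          # y = x, slope = 1 (a 45-degree hill)

# "Turn the camera" 45 degrees: rotate the whole scene by -45 degrees.
theta = np.deg2rad(-45.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
view = R @ road

# Fit a line to each version: the hill disappears in the rotated view.
print("true slope:  ", np.polyfit(road[0], road[1], 1)[0])   # ~1.0
print("viewed slope:", np.polyfit(view[0], view[1], 1)[0])   # ~0.0
```

The data points never changed; only the frame of reference did. That is precisely how a presentation choice can masquerade as evidence.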

AI makes it easier to get predictions wrong. Errors of omission and confirmation bias are the juice that keeps bias flowing. When AI models are trained on biased data or designed with specific objectives, they can inadvertently, or even intentionally, reinforce these cognitive pitfalls (O'Neil, 2016). Our innate desire for causal explanations, coupled with AI-driven recommendations built on skewed data and feeding our hero fallacy, can create a powerful feedback loop that amplifies errors of omission and confirmation bias.
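Here is a toy simulation of that feedback loop (the click rates and the initial 60/40 split are assumptions, not data from any real system): a model recommends content in proportion to its training mix, confirmation-biased users click hero-framed explanations a bit more often, and the model is retrained on the resulting engagement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two framings of the same success story: "hero" vs. "circumstance".
# The training corpus starts with a mild skew: 60% hero framing.
share_hero = 0.60

for step in range(10):
    # The model recommends items in proportion to its training mix.
    shown_hero = int((rng.random(1000) < share_hero).sum())
    shown_circ = 1000 - shown_hero

    # Confirmation bias: hero framing gets clicked a bit more often
    # (assumed rates: 30% vs. 20%).
    clicks_hero = shown_hero * 0.30
    clicks_circ = shown_circ * 0.20

    # Retrain on the engagement data, closing the loop.
    share_hero = clicks_hero / (clicks_hero + clicks_circ)
    print(f"round {step}: hero share = {share_hero:.2f}")
```

A modest initial skew compounds round after round until hero framing crowds out situational explanations almost entirely.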
To mitigate these risks, a balanced approach to AI is essential. While AI offers powerful tools for data analysis and prediction, human oversight remains paramount. Frameworks such as those advocated by Personal Finance Reimagined (PFR) in our book, Making Choices, Making Money, emphasize structured decision-making processes. These processes encourage individuals to evaluate information critically, consider multiple perspectives, and move beyond superficial causal links, distinguishing between human intuition and AI augmentation. This integrated approach, which incorporates decision frameworks, technology, and education, empowers individuals to harness the utility of AI while remaining vigilant against its potential for perceptual distortion.
Further, PFR provides prompt engineering suggestions to manage your GenAI experience in ways that reduce the potential for bias. PFR shows you how to make GenAI a thinking partner by taming our natural bias tendencies.
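PFR's actual prompt suggestions are in the book; purely as a hypothetical illustration of the idea, a debiasing wrapper might force the model to surface situational factors and disconfirming evidence before it answers:

```python
# Hypothetical illustration only -- not PFR's actual prompt language.
# The wrapper nudges a GenAI model to surface situational causes and
# disconfirming evidence before it commits to a tidy hero story.
def debias_prompt(question: str) -> str:
    return (
        f"{question}\n\n"
        "Before answering:\n"
        "1) List situational or environmental factors, not just people, "
        "that could explain the outcome.\n"
        "2) State the strongest evidence AGAINST your initial answer.\n"
        "3) Note what data is missing, since knowledge is never complete, "
        "static, or centralized.\n"
    )

print(debias_prompt("Why did this startup succeed?"))
```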
References
Hulett, Jeff. “When Maps Melt: Why Probability Is Not Frequency.” The Curiosity Vine, 2025. Explains why reliance on historical data (frequency) alone is flawed for future predictions in human affairs, detailing the three core informational limits (knowledge is never complete, static, or centralized) that necessitate probabilistic reasoning.
O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016. Summarizes how algorithms can perpetuate and exacerbate existing societal biases, impacting everything from credit scores to criminal justice.
Schultz, Wolfram. “Predictive Reward Signal of Dopamine Neurons.” Journal of Neurophysiology, 80(1), 1998, 1-27. Explains the role of dopamine neurons in signaling reward predictions and learning, illustrating the neurobiological basis for our pursuit of understanding and positive outcomes.
Ross, Lee. “The Intuitive Psychologist and His Shortcomings: Distortions in the Attribution Process.” Advances in Experimental Social Psychology, 10, 1977, 173-220. Highlights the fundamental attribution error, demonstrating our systematic tendency to attribute others' actions, including success, to stable internal characteristics rather than external circumstances.

About the author: Jeff Hulett leads Personal Finance Reimagined, a decision-making and financial education platform. He teaches personal finance at James Madison University and provides personal finance seminars. Check out his book -- Making Choices, Making Money: Your Guide to Making Confident Financial Decisions.
Jeff is a career banker, data scientist, behavioral economist, and choice architect. He has held banking and consulting leadership roles at Wells Fargo, Citibank, KPMG, and IBM.

