Embrace the Power of Changing Your Mind: Think Like a Bayesian to Make Better Decisions, Part 1
- Jeff Hulett
- Sep 24
- 15 min read

Introduction: Why Changing Our Minds Matters
Changing your mind is not weakness; it is disciplined strength. We live in a world that rewards certainty, slogans, and quick takes. Yet the choices that shape our careers, finances, health, and relationships usually unfold under uncertainty. The risk is obvious: when we cling to yesterday’s belief in a shifting world, our judgment drifts away from reality. The result is avoidable regret—missed opportunities, inefficient effort, and decisions we cannot defend when the dust settles.
Psychologists call the stickiness of old beliefs confirmation bias and status-quo bias. Economists observe similar inertia in markets and organizations. Neuroscience adds a critical piece: the brain runs on prediction. When reality surprises us, dopamine spikes signal a prediction error. That signal is a built-in update invitation. Culture and ego often reject it. We rationalize away the signal or drown it out with social proof. A better path is to make those error signals actionable through a repeatable process for belief revision.
This article argues for one simple, powerful habit: treat every important belief as a working hypothesis, not a permanent identity. Test it, update it, and move. The process that keeps those updates honest is Bayesian inference. Rev. Thomas Bayes did not write a self-help manual. He left us a method for combining prior knowledge with new evidence to form a more accurate belief. That method scales from umbrellas and medical tests to job changes and investment policy.
The point is not to turn life into equations. The point is to install a lightweight belief-updating loop that runs in the background of daily judgment. When the loop runs, you respond to evidence instead of mood, noise, or pressure. You reduce overreaction to one-off events. You raise your signal-to-noise ratio. You get to be wrong early, not catastrophically late.
This installment lays the foundation. It introduces a words-first version of Bayes’s rule, shows how the pieces interact, then warms up with two examples: an everyday umbrella decision and a gentle medical-testing case. Later installments add classic puzzles and a full career case study centered on Mia’s job decision, plus a practical playbook for turning updates into action.
Article Series
Over the course of five installments, we will move from foundations to practice, showing how you can strengthen your ability to update beliefs, make confident choices, and better navigate uncertainty.
Part 1: Foundations of Bayesian Thinking. Introduces the belief-updating framework, intuitive examples, and the basic structure of Bayesian inference.
Part 2: From Intuition to Application. Builds from simple examples (like weather forecasts and medical tests) to classic Bayesian puzzles that reveal common errors in reasoning.
Part 3: Bayesian Inference in Real Life. Applies the framework to real-world decisions, with a focus on Mia’s job-change example and how structured updating supports free will.
Parts 4 & 5: Tools and Exercises for Becoming Bayesian. Provides advanced discussion, practical exercises, and a workbook-style appendix with prompts (including “Ask GenAI”) to help you apply Bayesian updating to your own choices.
This journey is about more than math. It is about learning how to embrace uncertainty, change your mind when the evidence calls for it, and make better decisions that serve your long-term goals.
The Belief Updating Framework
Before we dive into examples, let’s clarify the two core ingredients of Bayesian thinking:
EB (Existing Belief): This is the position you already hold about the world, shaped by your experiences, culture, or intuition. Think of a belief as a mental icon or app: it holds a pre-defined set of instructions that will run automatically, without conscious deliberation. In this way, beliefs behave much like habits. They serve as mental “easy buttons” that streamline cognition, saving time and energy. The challenge is that our world is dynamic and ever-changing. A belief that once served you well can become outdated. Anything outdated that continues to run automatically becomes risky—leading you to make poor decisions without realizing it. This is why belief updating is so critical.
NE (New Evidence): This is fresh information that challenges, supports, or reshapes your existing belief. It can arrive as an observation, a data point, or even a friend’s comment that causes you to reconsider.
When new evidence shows up, we need a disciplined way to test how it affects our existing belief. That’s where the Belief Updating Framework comes in. We will use it throughout this article series to illustrate how Bayesian inference works in practice:
A = Prior belief (P(EB)). Your existing belief and its strength.
B = Likelihood (P(NE | EB)). If your belief were true, how consistent would this new evidence be?
C = Baseline Evidence (P(NE)). How common is this evidence across the whole world of possibilities, not only under your favored belief?
D = Posterior belief (P(EB | NE)). Your revised belief after weighing the prior, the likelihood, and the baseline.
In words:
Updated Belief = (Existing Belief × Likelihood of Evidence) ÷ Baseline Evidence.
In compact math:
P(EB | NE) = [P(EB) × P(NE | EB)] ÷ P(NE), or in framework letters, D = (A × B) ÷ C.
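For readers who want to see the mechanics run, here is a minimal sketch of the update as a Python function. The function name and the input check are ours for illustration; the arithmetic is exactly D = (A × B) ÷ C.

```python
def bayes_update(prior: float, likelihood: float, baseline: float) -> float:
    """Return the posterior D = (A * B) / C.

    prior      -- A = P(EB): strength of the existing belief
    likelihood -- B = P(NE | EB): how expected the evidence is if the belief holds
    baseline   -- C = P(NE): how common the evidence is across all possibilities
    """
    if not 0.0 < baseline <= 1.0:
        raise ValueError("baseline must be a probability greater than zero")
    return (prior * likelihood) / baseline

# Example: a 1% prior, evidence 99% expected under the belief, and a
# 5.94% baseline (the medical-test case later in this article):
print(bayes_update(0.01, 0.99, 0.0594))  # ≈ 0.167
```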
Why these parts?
Each checks a different failure mode:
A (Prior) counters recency bias by honoring what you already know, while keeping it explicit and adjustable.
B (Likelihood) asks a directional question: If my belief were true, would this kind of evidence be expected or surprising?
C (Baseline) prevents overreaction to evidence that is common no matter which belief is true. It is the anchor against sensational anecdotes.
D (Posterior) is the commitment device—an updated confidence that should drive behavior.
The Essence of Belief Updating
The most important lesson about the Belief Updating Framework is that it does not need to be perfect. In fact, it is designed to encourage good over perfect. The act of moving through the framework provides most of the accuracy. The precision of each component—your estimates of the prior, the likelihood, and the baseline—is far less important than the discipline of using the structure.
If you can get each estimate “good enough,” the framework itself will produce a decision that is more accurate than your intuition alone. The real power lies in making decisions, testing beliefs over time, and allowing yourself to change course when the evidence points elsewhere. Updating is not a flaw. It is life itself. Beliefs are meant to evolve, and the framework provides a systematic way to keep them aligned with reality. When change is in order, you step down a new road with a refreshed belief.
Gradually, Then Suddenly
Belief change rarely feels linear. It often unfolds like a Hemingway character’s fortune. In The Sun Also Rises, a character is asked how he went bankrupt. The reply is simple: “Two ways. Gradually and then suddenly.” Our beliefs behave the same way. We live with a prior for months or years, nudged by small pieces of evidence. The shifts are minor—almost invisible—until new evidence accumulates to a tipping point. What once felt unshakable flips quickly. The Belief Updating Framework lets you track both phases: the gradual drift as probabilities nudge you along, and the sudden shift when the evidence bucket finally tips. Follow the process—prior, likelihood, baseline—and the math works in your favor. Updates accumulate. When the “suddenly” moment arrives, you are prepared, not surprised.
Words Before Math
The essential discipline is to explain A, B, and C in plain language before touching a number. If you jump straight to arithmetic, you risk smuggling in assumptions that were never examined. Words surface the logic:
Is the prior clear?
Does the evidence actually fit the belief?
How common is this evidence in the wider world?
Only after you have articulated these questions should you assign a rough percentage or even a simple category like low, medium, or high. Numbers then refine what words have already clarified.
Where Neuroscience and Economics Meet Bayes
The framework also reflects how both the brain and markets operate. Neuroscience tells us that the dopamine-based prediction-error system rewards accurate updates. Each time you revise a belief closer to the truth, you strengthen neural pathways that reduce future surprise. In economics, this is equivalent to lowering the transaction costs of making good decisions.
You also gain from comparative advantage. Let tools and calculators handle the arithmetic, data gathering, and record-keeping. Your role is to frame the question, interpret the context, and weigh values, goals, and risk trade-offs. AI fits naturally into this system—it provides structure and computational assistance, while human judgment supplies meaning and direction.
This is the balance of Bayesian thinking: a framework that prizes discipline over precision, clarity over perfection, and adaptability over rigidity.
Common pitfalls Bayes solves
Base-rate neglect: ignoring how common the evidence is overall (C). Bayes forces you to compute or at least estimate C.
Story dominance: a vivid anecdote crowds out quieter base rates. C restores balance.
Anchoring: the first number or early choice exerts gravitational pull. By making A explicit and then subjecting it to B and C, you weaken the anchor.
Asymmetric evidence: you overweight signals you “control” and underweight external constraints. Likelihood B asks you to separate agency from environment.
A simple ruler for thresholds
You do not need second-decimal precision. Many decisions improve with three bands and a bias toward action (a short sketch follows the list):
Green (≥70%): continue, compound, or double down.
Yellow (50–69%): gather targeted evidence, run a pilot, or stage-gate the decision.
Red (<50%): pivot, pause, or try a safer alternative.
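The ruler translates directly into a tiny decision helper. This is a sketch under the assumptions above; the function name is illustrative, and the thresholds should be tuned to your own payoffs.

```python
def action_band(posterior: float) -> str:
    """Map an updated belief (D) onto the three-band ruler above."""
    if posterior >= 0.70:
        return "green: continue, compound, or double down"
    if posterior >= 0.50:
        return "yellow: gather targeted evidence, pilot, or stage-gate"
    return "red: pivot, pause, or try a safer alternative"

print(action_band(0.62))  # yellow: gather targeted evidence, pilot, or stage-gate
```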
The Umbrella Example: A No-Math Scaffold
Imagine you arrive at an outdoor concert. The morning forecast suggested rain was unlikely. As you step out of your car, you notice dark, fast-moving clouds and a cool gust that smells like rain. You pause and reach for the umbrella in your back seat.
What just happened? You ran a tiny Bayesian update:
A = Prior belief: “Rain is unlikely.” The forecast anchored you low.
B = Likelihood: “If rain were coming, clouds like these and that gust would be typical.” High.
C = Baseline: “Clouds do appear on many dry days.” Moderate.
D = Posterior: “Rain is likely enough now to change behavior.” You carry the umbrella.
This update did not require numbers. You noticed the fit between the evidence and the rainy-day world (high B). You checked how common similar clouds are even on dry days (non-trivial C). You concluded your belief should increase enough to justify a low-cost action: carry the umbrella. The cost of being wrong is small; the benefit of being right is large. Bayesian thinking pairs naturally with asymmetric payoff logic.
Turn the dial: if the venue offers covered seating nearby, your action threshold rises—you might skip the umbrella. If the event is a long walk from parking with no cover, the threshold falls—you carry it even on weaker evidence. Bayes updates belief; payoffs turn an update into a move.
Why the baseline matters in daily life
Suppose your friend texts, “I hear thunder.” If you live where afternoon storms are frequent, thunder is common (high C), so a single rumble should not swing you to certainty. If you live in a dry climate where thunder is rare (low C), the same rumble carries more weight. Same evidence, different baseline, different update. This is why copying another person’s decision without their context is risky. Their C is not your C.
From words to a quick sketch
When the stakes are higher than umbrellas, jot a fast A/B/C on paper:
A (Prior): “Given last week’s forecast and season, rain chance low.”
B (Likelihood): “These clouds and gusts are typical of incoming rain.”
C (Baseline): “Clouds like this occur often, even without rain.”
D (Posterior): “Raise rain belief to the ‘carry umbrella’ band.”
That 30-second sketch avoids overreaction to one dramatic cue and underreaction to a meaningful pattern. It also creates a record you can learn from later: was the update too timid or too aggressive? Learning compounds when you leave breadcrumbs.
A Gentle Math Example: Medical Testing Without Panic
Medical testing is where base-rate neglect causes the most pain. A single positive result can trigger fear out of proportion to the true risk. Bayes realigns intuition with reality.
Set-up
A (Prior): The condition affects 1 in 100 people (1%).
B (Likelihood): If someone has the condition, the test flags positive 99% of the time.
False-positive rate (this feeds the baseline, C): among healthy people, the test still flags positive 5% of the time.
Question: “Given a positive test, what is the chance I really have the condition?”
Explain the pieces in words first
Prior A tells you rare is rare. Likelihood B says, “If I were sick, a positive test is very expected.” The baseline C asks, “Across everyone tested—sick and healthy—how often does a positive appear?” C matters because a fair number of healthy people will trigger positives by mistake when millions test.
Compute with frequencies, not only percentages
Think in a crowd of 10,000 people:
Expected to be sick: 1% of 10,000 = 100 people.
Of those 100, true positives at 99% ≈ 99.
Expected to be healthy: 9,900 people.
Of those 9,900, false positives at 5% ≈ 495.
Total positive tests ≈ 99 + 495 = 594.
Now update: among 594 positives, about 99 are true cases. Posterior D ≈ 99/594 ≈ 0.167, or 16.7%. The test increased belief dramatically (from 1% to ~17%) but did not make the belief certain. Next steps—confirmatory tests, clinical judgment, and context—make sense.
Write the formula transparently
A = P(EB) = 0.01
B = P(NE | EB) = 0.99
C = P(NE) = (0.01 × 0.99) + (0.99 × 0.05) = 0.0099 + 0.0495 = 0.0594
D = P(EB | NE) = (A × B) ÷ C = 0.0099 ÷ 0.0594 ≈ 0.167
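If you want to verify the arithmetic yourself, here is a short sketch that computes the posterior both ways: by counting people in a crowd and by the compact formula. The numbers are the worked example’s assumptions.

```python
# Worked medical-test example, computed two ways.
population = 10_000
prevalence = 0.01        # A = P(EB)
sensitivity = 0.99       # B = P(NE | EB)
false_positive = 0.05    # false-positive rate among healthy people

# 1) Frequency framing: count people in a crowd of 10,000.
sick = population * prevalence                  # 100 people
true_pos = sick * sensitivity                   # 99 true positives
healthy = population - sick                     # 9,900 people
false_pos = healthy * false_positive            # 495 false positives
posterior_freq = true_pos / (true_pos + false_pos)

# 2) Probability framing: D = (A * B) / C.
baseline = prevalence * sensitivity + (1 - prevalence) * false_positive
posterior_prob = (prevalence * sensitivity) / baseline

print(round(posterior_freq, 3), round(posterior_prob, 3))  # 0.167 0.167
```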
Managing Information and Incentives
The Bayesian approach is not only about how you handle numbers—it is also about how you handle who provides the numbers. Every piece of evidence comes from somewhere, and those sources often carry their own incentives. Doctors, hospitals, insurers, pharmaceutical companies, and government programs all operate within economic systems that reward certain behaviors.
This does not mean the American medical system cannot be trusted. It means the system can be trusted to follow its incentives—and those incentives are not always aligned with your well-being. The Bayesian lens gives you a way to pause and ask:
What assumptions underlie the information I’ve been given?
What incentives might be shaping how this guidance is presented?
Should I dig deeper to validate the evidence before acting on it?
By explicitly testing priors, likelihoods, and baselines, you can make stronger decisions in a landscape where conflicts of interest are real. This does not replace medical expertise, but it makes you an active participant in filtering, questioning, and refining the evidence presented to you.
Why This Matters for Decisions
First, it prevents panic. A positive test moves you from “rare” to “possible,” not to “certain.” Second, it encourages the right next action: follow-up testing with a more specific method, or retesting to rule out lab error. Third, it reminds both clinicians and patients to consider pre-test probability—the prior (A), based on symptoms, exposure, and local prevalence—before interpreting a result.
How Changes in A, B, or C Shift the Outcome
Higher prevalence (A): During an outbreak, suppose 10 in 100 people have the condition. Then among 10,000 people, 1,000 are sick; 990 test positive; 9,000 are healthy; 450 test positive falsely; total positives 1,440; posterior ≈ 990/1,440 ≈ 69%. The same test now justifies stronger action because A rose.
Better specificity (lower false positives in C): If the false-positive rate drops from 5% to 1%, total positives shrink and the positive predictive value rises, even with the same A and B.
Lower sensitivity (B): If the test misses more true cases, a negative result is less reassuring, and the screening strategy must adapt.
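A quick sketch makes these shifts visible by recomputing the posterior for the scenarios above (the lower-sensitivity case concerns negative results, so it is omitted here). The scenario values are the ones discussed in this section.

```python
def positive_predictive_value(prevalence: float, sensitivity: float,
                              false_positive: float) -> float:
    """Posterior P(condition | positive test), i.e. D = (A * B) / C."""
    baseline = prevalence * sensitivity + (1 - prevalence) * false_positive
    return (prevalence * sensitivity) / baseline

scenarios = [
    ("base case", 0.01, 0.99, 0.05),
    ("outbreak: prevalence up (A)", 0.10, 0.99, 0.05),
    ("better specificity: false positives down (C)", 0.01, 0.99, 0.01),
]
for label, a, b, fp in scenarios:
    print(f"{label}: posterior = {positive_predictive_value(a, b, fp):.0%}")
# base case: 17%; outbreak: 69%; better specificity: 50%
```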
Turning Bayes into Patient-Friendly Language
“You tested positive. Given how rare this condition usually is and how the test behaves, your chance is roughly one in six, not near certainty. The next step is a confirmatory test with fewer false positives. We will also weigh your symptoms and exposure to adjust the prior up or down. The goal is to move from one in six toward either much lower or much higher before making a treatment decision.”
A micro-checklist for any test or screen
What is the base rate (A) in my setting?
How expected is this evidence if the condition were present (B)?
How common is the evidence overall (C), including false alarms?
What action fits the updated belief (D): watchful waiting, confirmatory testing, or treatment?
What is the payoff asymmetry if I act now versus wait?
Practical notes for leaders and analysts
Replace “significant” with “useful.” A statistically significant blip may be common noise (high C). Ask how the signal changes behavior given payoffs.
Visualize frequency trees. People grasp counts faster than conditional percentages.
Separate estimation from decision. Estimation updates D; decision integrates D with costs, benefits, and timing.
Deepening the Baseline (C): a closer look
Baseline Evidence (C) is the least intuitive step because we prefer stories to statistics. A vivid anecdote feels rare even when it is common, and our attention confuses “memorable” with “meaningful.” To tame this bias, translate the evidence into a world of many trials. Ask: “If I watched 100, 1,000, or 10,000 runs, how often would I see this evidence—regardless of which belief is true?” This large-numbers frame shows whether the evidence is a black swan or a backyard pigeon. Outliers remain, but the frame keeps us from overreacting to them.
Here is where mathematical intuition matters. Because Baseline Evidence sits in the denominator, it acts like a lever on how much your Prior Belief (A) and the Likelihood of the New Evidence (B) transfer into your Posterior Belief (D).
If the Baseline Evidence probability is high and stable, the lever is steady. Your Posterior Belief mostly reflects the straightforward interaction of your prior and the likelihood.
But if the Baseline Evidence probability is low—or falls sharply—the lever shifts. Suddenly, the same prior and likelihood produce a much larger swing in your posterior. In this case, the rarity of the evidence magnifies the update, sometimes dramatically so.
The lesson: Baseline Evidence tells you when to update cautiously and when to update boldly. If the evidence appears often across multiple explanations, it is a weak discriminator—you should move slowly. If it appears rarely except under your belief, it is a strong discriminator—you should move decisively.
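A small numeric sketch shows the lever at work. The prior (A) and likelihood (B) below are illustrative values held fixed; only the baseline (C) varies. Note that in a coherent model C can never fall below A × B, so the posterior stays a valid probability.

```python
prior, likelihood = 0.10, 0.80   # illustrative A and B, held fixed
for baseline in (0.80, 0.40, 0.20, 0.10):
    posterior = (prior * likelihood) / baseline
    print(f"C = {baseline:.2f} -> D = {posterior:.2f}")
# C = 0.80 -> D = 0.10   common evidence: weak discriminator, small move
# C = 0.10 -> D = 0.80   rare evidence: strong discriminator, decisive move
```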
Three micro-exercises to build intuition
Hiring signal. A candidate arrives with a glowing reference. A: your belief the candidate will succeed based on the rest of the packet. B: if the candidate is excellent, references like this are common. C: glowing references are also common for average candidates. D: do not over-update unless the reference speaks to unique, job-critical behavior with examples.
Investment headline. A stock jumps 5% on “breakthrough news.” A: your belief about the firm’s long-run earnings power. B: if true breakthroughs occur, a jump is expected. C: headlines labeled “breakthrough” appear weekly across the market; most do not change the base trajectory. D: unless A was already high, you log the news, not chase it.
Health habit. A single week of perfect sleep and diet makes you feel amazing. A: your belief the habit shift helps. B: if the habit works, feeling great is expected. C: short streaks often occur due to noise (lighter calendar, seasonal mood). D: extend the trial to four weeks before major conclusions.
Quantifying without numbers
Sometimes numbers exist but do not transfer well to your case. In those moments, use coarse bands:
A (Prior): low / medium / high confidence.
B (Likelihood): evidence looks off-brand / neutral / on-brand for the belief.
C (Baseline): rare / occasional / common in your environment.
D (Posterior): move down a band / hold / move up a band.
This coarse update is superior to no update. It keeps you honest about direction even when precision is unavailable. Later, when better data arrives, you can refine the bands into percentages.
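If it helps to see the coarse update as a rule, here is a minimal sketch. The band names follow the lists above, while the step-size rule (rare or occasional evidence earns a full band move, common evidence earns none) is our illustrative assumption, not a fixed part of the framework.

```python
BANDS = ["low", "medium", "high"]

def coarse_update(prior_band: str, evidence_fit: str, baseline: str) -> str:
    """Move the belief band down, hold, or move it up.

    evidence_fit (B): "off-brand" | "neutral" | "on-brand"
    baseline (C):     "rare" | "occasional" | "common"
    Illustrative rule: on-brand evidence pushes up a band and off-brand
    pushes down, but only when the evidence is rare or occasional;
    common evidence is a weak discriminator, so the band holds.
    """
    direction = {"off-brand": -1, "neutral": 0, "on-brand": 1}[evidence_fit]
    weight = {"rare": 1, "occasional": 1, "common": 0}[baseline]
    idx = BANDS.index(prior_band) + direction * weight
    return BANDS[max(0, min(idx, len(BANDS) - 1))]

print(coarse_update("medium", "on-brand", "rare"))  # -> "high"
```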
Payoff asymmetry: why updates are not the whole story
Bayes estimates belief. Decisions blend belief with payoffs and timing. A small belief increase can still justify action when upside dominates downside. Conversely, a large belief increase can wait if the option value of delay is high. Two quick examples:
Umbrella again: costs little to carry, saves a drenched evening. Act on modest belief.
Surgery decision: even with a high belief in benefit, you may stage more diagnostics if the procedure carries serious risk. You harvest option value by delaying until information improves.
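A back-of-envelope sketch of the payoff logic, with made-up payoff units; the point is the asymmetry, not the numbers.

```python
def expected_value(belief: float, gain: float, loss: float) -> float:
    """Belief-weighted upside minus the chance-weighted downside of acting."""
    return belief * gain - (1 - belief) * loss

# Umbrella: modest belief, cheap downside -> acting wins.
print(expected_value(belief=0.30, gain=10.0, loss=1.0))   # ≈ +2.3: carry it
# Surgery: high belief but severe downside -> waiting can still win.
print(expected_value(belief=0.80, gain=10.0, loss=50.0))  # ≈ -2.0: wait
```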
Ethical guardrails for updates
Steel-man the other side. Before updating, state the best case for the belief you oppose. If your update survives that test, it is more credible.
Separate observation from judgment. Record what happened, then interpret. Do not let labels pre-color the facts.
Disclose uncertainty when others rely on your conclusion. “Here is my updated view and what would change it.” Intellectual humility builds trust.
A one-page template you can photocopy
Belief: _____________________________
Purpose of decision: __________________
Date: __________ Next review: ________
A — Prior (P(EB)):
Why this prior makes sense: _________________________________________
Confidence band (Low/Med/High): ____
B — Likelihood (P(NE | EB)):
Why this evidence fits or clashes: ___________________________________
Band (Off-brand/Neutral/On-brand): ____
C — Baseline (P(NE)):
How common is this evidence broadly? ________________________________
Band (Rare/Occasional/Common): ____
D — Posterior (P(EB | NE)):
Direction (Down/Hold/Up): ____
New band: ____
Action linked to band: ______________________________________________
Next trigger to revisit (time or event): ___________________________
Build culture around updates
If you lead a team, normalize updates. Hold a monthly “belief review” where you pick two core assumptions and run the A/B/C/D cycle out loud. Reward clean revisions over stubborn consistency. Create a small trophy for the “Best Update of the Month.” Gamify humility. Over time your team will ship better products, exit dead ends sooner, and trust one another more because they see how decisions evolve.
A closing thought for Part 1
The essence of belief updating is not perfection—it is disciplined adaptability. The framework does not demand exact numbers or flawless calculations. It demands that you pause, clarify your priors, test evidence against them, and recognize when change is needed. Good enough is powerful enough. By making decisions, testing them, and updating over time, you grow more accurate and more resilient.
Bayesian updating is not a parlor trick. It is a way to respect what you know, listen to what the world tells you next, and translate the combination into behavior. When you practice on umbrellas and lab tests, you train a reflex you can rely on when the stakes rise.
In Part 2, we will stress-test the framework against counterintuitive puzzles, then apply it to a consequential career decision where emotions run high and the costs of waiting are real.
Resources for the Curious
Box, George E. P., and Norman R. Draper. Empirical Model-Building and Response Surfaces. Wiley, 1987.
Dalio, Ray. Principles: Life and Work. Simon & Schuster, 2017.
Dweck, Carol. Mindset: The New Psychology of Success. Random House, 2006.
Epictetus. Enchiridion. 125 CE.
Hemingway, Ernest. The Sun Also Rises. Charles Scribner’s Sons, 1926.
Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
Rumsfeld, Donald. Known and Unknown: A Memoir. Sentinel, 2011.
Sapolsky, Robert. Determined: A Science of Life Without Free Will. Penguin Press, 2023.
Tetlock, Philip, and Dan Gardner. Superforecasting: The Art and Science of Prediction. Crown, 2015.
Jeff Hulett’s Related Articles:
Hulett, Jeff. “Challenging Our Beliefs: Expressing our free will and how to be Bayesian in our day-to-day life.” The Curiosity Vine, 2023.
Hulett, Jeff. “Changing Our Mind.” The Curiosity Vine, 2023.
Hulett, Jeff. “Nurture Your Numbers: Learning the language of data is your Information Age superpower.” The Curiosity Vine, 2023.
Hulett, Jeff. “How To Overcome The AI: Making the best decisions in our data-saturated world.” The Curiosity Vine, 2023.
Hulett, Jeff. “Our World in Data, Our Reality in Moments.” The Curiosity Vine, 2023.
Hulett, Jeff. “Solving the Decision-Making Crisis: Making the most of our free will.” The Curiosity Vine, 2023.
Hulett, Jeff. Making Choices, Making Money: Your Guide to Making Confident Financial Decisions. Definitive Choice, 2022.