When Maps Melt: The Limits of Knowledge in Decision-Making
- Jeff Hulett

- Oct 1, 2025
Updated: Apr 2

At Personal Finance Reimagined (PFR), we often remind students and entrepreneurs of an essential point: good decisions depend on clear thinking, yet one recurring confusion is between probability and frequency. The error arises when historical frequencies—what has happened—are treated as certain probabilities for the future. As the physicist E.T. Jaynes suggested, frequency is merely an observation of the past terrain, while probability is the map we use to navigate the future. Our knowledge of the past territory is always incomplete, so the future-focused map we rely on is inherently flawed. By understanding this crucial distinction, we can refine our decision-making process and keep our map effective for guiding long-term success.
About the author: Jeff Hulett leads Personal Finance Reimagined, a decision-making and financial education organization. He teaches personal finance at James Madison University and provides personal finance seminars. Check out his book -- Making Choices, Making Money: Your Guide to Making Confident Financial Decisions.
Jeff is a career banker, data scientist, behavioral economist, and choice architect. Jeff has held banking and consulting leadership roles at Wells Fargo, Citibank, KPMG, and IBM.
What It Is
Facing the future: Probability is the logic of uncertainty — a forward-looking framework helping us reason about what might happen next. It reflects how plausible an outcome is given what we know today. Probability is not a physical property of the world but a tool for making informed decisions about the future. Errors in probability show up as a lack of accuracy.
Facing the past: Frequency, by contrast, is an empirical record helping us interpret the past. It is backward-looking — a count of how often something has happened in repeated trials or observations. Errors in frequency show up as a lack of precision.
Frequency informs us about what has been, while probability guides us toward what could be. The difference between the two lies in the direction of our gaze: frequency looks behind us to measure what was; probability looks ahead to navigate what will be.
Statisticians—and I am one!—are famous for using p-values or other measures of precision to validate their findings, often saying things like, "The p-value < .05 is the evidence against the assumption nothing is happening." This is the confusing language of rejecting the null hypothesis to back into the thing we are trying to understand with a reasonable amount of precision. Once we get over the backward-sounding language, the challenge is that this confuses our precise past view of the terrain for an accurate map of the future. In other words, statisticians can help us be incredibly precise about not getting to where we want to go!
Think about the weather. When a meteorologist says there is a 40% chance of rain, they are suggesting a forward probability—a forecast grounded in knowledge and models. This forecast is constantly updated because new and different information is always available. That is why the best forecast of all—the weather in the next ten seconds—is achieved simply by looking outside. If we check all the days where that initial 40% forecast was made and find it rained on 38 out of 100, we are observing a historical frequency. One describes belief, the other records experience. One predicts the variable future, the other records the fixed past.
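This calibration idea can be sketched in a few lines of code. The simulation below uses illustrative numbers only (a hypothetical log of 100 days that carried a 40% forecast) to show how an observed frequency sits near, but rarely exactly on, the stated probability:

```python
import random

random.seed(7)

# Simulate 100 days on which the forecast said "40% chance of rain".
# The forecast is a forward-looking probability; whether it rained is
# a backward-looking observation. Assume the true propensity is 0.40.
forecast_prob = 0.40
rained = [random.random() < forecast_prob for _ in range(100)]

# The historical frequency: how often it actually rained on those days.
observed_frequency = sum(rained) / len(rained)

print(f"Stated probability: {forecast_prob:.2f}")
print(f"Observed frequency: {observed_frequency:.2f}")
```

The two numbers converge over many trials but are not the same thing: the frequency records a finite past, while the probability expresses belief about an open future.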
Why It Matters: The Four Nevers and The Limits to Knowledge in Human Affairs
This distinction is not just philosophical — it shapes how we interpret evidence and make choices. The Four Nevers define the core informational challenges in decision-making and prediction within complex, real-world systems. They explain why simply extrapolating from historical frequency or current data is inherently flawed, necessitating a framework built on humility, adaptability, and continuous inquiry. As we will see: 1) bias in decision-making often results from the Four Nevers, and 2) bias is less the intended work of a few bad actors than the emergent result of how all actors respond to the environment's incentives and constraints.
1. Knowledge is Never Complete
Rumsfeld's Never: Donald Rumsfeld famously identified the challenge of unknown unknowns. This vertical deficit ensures no individual possesses a high-resolution map of the entire world. This dimension is oriented toward a single person’s knowledge perspective over the vertical axis of time: the deep, longitudinal path from the past, through the shifting present, and into the unmanifested future.
No matter the resolution of our current data, it remains a surface-level map that fails to capture the hidden depth of reality. A dataset can never fully capture the intentions, private motivations, or future contingencies that have not yet manifested. These unobservable factors exist beneath the surface of any observation, rendering current knowledge inherently incomplete.
In the context of the cone of uncertainty, this vertical deficit creates the boundaries of what a person deems possible. Daniel Kahneman noted that individuals often suffer from The Illusion of Understanding. We naturally construct simple, coherent narratives from a few sparse facts along our personal timeline, which makes our incomplete maps feel far more finished than they actually are. We mistake what we can easily imagine for what is likely to happen.
Example: Imagine an analyst predicting a stock’s future price. The analyst has access to all public data: earnings, price movements, and trends. However, the true outcome is dictated by vertical "unknown unknowns" emerging from the depth of the company’s timeline. This is not just about what a CEO knows; it is about what no one can yet see.
Perhaps a critical chemical reaction in a new battery prototype is currently stable but will unexpectedly degrade after its 1,000th cycle—an event that has not yet occurred in any lab. Or perhaps a sudden, once-in-a-century geopolitical shift is currently brewing in an unrelated industry that will eventually collapse the company’s supply chain. These variables are not merely "uncollected" from other people; they are unmanifested. They exist in the vertical depth of the future, rendering any current map of the world inherently surface-level and incomplete.
2. Knowledge is Never Static
Goodhart's Never: Goodhart’s Law reveals the act of measuring a system changes the system itself. This principle establishes two facts: first, the world is constantly changing; second, and more critically, people are not independent agents like simple statistical data points. Unlike physical processes, human agents observe, interpret, and react to changes in their environment, policies, or measurements. This makes knowledge ephemeral and often self-defeating: the very act of setting a goal for a system changes the behavior within it.
This dynamism is where Goodhart’s Law manifests:
"When a measure becomes a target, it ceases to be a good measure."
There is a powerful analogue here to the Heisenberg Uncertainty Principle in physics. In the same way the act of observing a particle's position fundamentally obscures its momentum, the act of observing or targeting a human system fundamentally changes its trajectory. Both concepts define inherent limits to knowledge: the observer is never truly separate from the observed.
Example: Consider a major financial institution trying to boost performance by setting a strict Key Performance Indicator (KPI): "Every customer must have eight banking products." The executive committee assumes historical data reflecting product adoption represents stable, future demand. However, reactive human dynamism kicks in. Bank employees are not passive data points; they are adaptive agents reacting to an aggressive target. To meet the quota, employees might strategically open unauthorized accounts or prioritize quantity over ethical service. The initial "knowledge"—that a high product-per-customer frequency equals sales efficiency—becomes obsolete the moment it is targeted. The underlying goal—true customer service and profitable growth—is subverted by the strategic, interdependent behavior of the people attempting to satisfy the metric. Knowledge is never static because human agents are always "gaming" the map.
3. Knowledge is Never Centralized
Hayek's Never: F.A. Hayek argued that vital information exists only in dispersed, localized forms. While the vertical deficit deals with an individual's depth over time, the horizontal silo exists across millions of individuals, locations, and peer groups in the present moment. This principle establishes that while the information necessary to understand a system may exist, it is distributed horizontally at the edges of the network. Hayek famously observed:
"The knowledge of the circumstances of which we must make use never exists in concentrated or integrated form."
Individual rationality depends on the specific horizontal silo a person occupies. This localized, specific knowledge cannot be effectively synthesized by a central algorithm without losing its vital, tacit context. A foreman might recognize a machine's unique sound, or a local merchant may sense a neighborhood's shifting mood. These insights are horizontal; they are known by peers on the ground but are invisible to a central authority. Because no two people stand in the exact same spot in the present, no two people share the same rational priorities.
Example: A retail chain’s central office decides to liquidate inventory in a specific region based on a "macro" report showing a local economic downturn. However, following Hayek’s logic, the local store managers know something the central computer does not: a major new factory just broke ground nearby, and the town is actually on the verge of a boom. The "knowledge" exists, but it is siloed at the edge of the network. Because this information is decentralized, the central authority makes a decision that is logically "correct" based on its aggregate data, but practically wrong based on localized reality.
4. Knowledge is Never Invariant
Kahneman's Never: Daniel Kahneman demonstrated the individual is not a fixed or consistent decision-maker. This principle asserts that human behavior and the data it produces are fundamentally inconsistent because the individual is not a "consistent calculator." Even when presented with the exact same facts, our judgment varies wildly based on psychological context, emotional state, and how information is framed. We are not "Econs"—rational actors with stable, laboratory-grade preferences—but are instead subject to cognitive biases, heuristics, and Noise. This lack of invariance means that knowledge about a past choice is often an unreliable predictor of a future choice. Our internal map for decision-making shifts depending on our internal and external environment.
Example: A charitable organization uses a "Propensity to Give" model to categorize "Donor B" as a "High-Altruism" individual based on years of consistent donations. The model labels them a "good person" who reliably puts others first. However, on a rainy Tuesday, Donor B is fifteen minutes late for a high-stakes job interview. They are currently focused on the negative framing of a potential career failure. When a person in clear need asks for help on the street, Donor B brushes past them.
The model would categorize this act as "Bad" or "Selfish," but Donor B hasn't fundamentally changed. Rather, their internal filter was anchored to a stressful time constraint, a phenomenon known as the Good Samaritan Effect. The data did not capture a fixed moral trait; it captured a snapshot of an individual whose willingness to help is variable and highly sensitive to their immediate situational context. Knowledge is never invariant because the human mind is a shifting landscape of perception.

When the Nevers converge: The Bastiat Test
The systemic challenge of the Four Nevers is robustly captured by Frédéric Bastiat's famous Broken Window Fallacy. Bastiat, a 19th-century French economist and legislator, used this parable to critique superficial thinking by forcing the reader to account for unseen consequences. His "seen and unseen" logic remains the gold standard for economic and rational testing: if a theory cannot account for the unobservable, it fails the test of time.
In the parable, a vandal breaks a baker's window. Immediate observers conclude this destruction is a net gain: the baker must pay the glazier, which stimulates the glazier's trade and, by extension, the economy. This simple, "seen" observation fails the Bastiat Test because it ignores the Four Nevers. Using the Nevers to analyze this fallacy confirms their relevance as a diagnostic tool for diverse rationality:
Never Complete: The observers only see the immediate transaction (the seen). They fail to see the opportunity cost (the unseen)—what the baker would have purchased instead (e.g., shoes from the cobbler). The frequency of economic activity is rendered incomplete because it ignores this crucial negative counterfactual.
Never Static: The destruction of the window is a permanent loss of capital that destabilizes the market. Because the baker must divert capital to repair the window, their costs increase. To absorb the cost, the baker must either raise prices or reduce supply, creating a net economic negative for the broader market. This reactive behavior demonstrates that the system has shifted and total wealth has been reduced.
Never Centralized: The flawed conclusion arises because the glazier's gain is visible and centralized in the transaction, while the cobbler's loss is distributed and invisible across the broader market. No single observer holds all the local knowledge required to calculate the true aggregate economic impact.
Never Invariant: Human judgment is subject to framing. Observers "anchor" on the visible exchange of money, perceiving it as growth. If the same value were lost without a visible transaction—like tools simply rusting—human perception would shift, correctly identifying it as a loss.
This scenario demonstrates how a frequency analysis based only on what is seen fails because the data is simultaneously incomplete, dynamically unstable, and decentralized.
Map or Territory?
These realities mean frequency is inherently limited. It looks backward, describing what has happened, but cannot fully anticipate how knowledge and behavior evolve. To make sound decisions, we must use frequency as a starting point but go further — applying probabilistic reasoning to think forward and adapt to change.
Healthcare: A doctor estimates you have a 10% chance of a condition; this is not the same as saying 10% of patients have it. The first depends on your symptoms, history, and test results; the second is population data.
Markets: Investors often mistake past market returns (frequencies) for future probabilities, ignoring shifting incentives and hidden risks.
Policy: Treating probability as nothing more than frequency overlooks F.A. Hayek’s key insight — knowledge is dispersed. Decision-makers must reason with incomplete information, not just tally historical counts.
In each case, confusing the map with the territory — frequency with probability — leads to costly mistakes. Recognizing the Four Nevers keeps us honest about what data can tell us and disciplined about using probability to bridge the gap between what was and what will be.
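The healthcare case above can be made concrete with Bayes' rule. All numbers below are assumptions for the sketch: a 10% population base rate, plus a hypothetical test with 85% sensitivity and 90% specificity. The point is that the patient-specific probability departs from the raw population frequency once individual evidence arrives:

```python
# Hypothetical illustration: population frequency vs. patient-specific
# probability. All numbers are assumptions for this sketch.
base_rate = 0.10      # population frequency: 10% of patients have the condition
sensitivity = 0.85    # P(positive test | condition)
specificity = 0.90    # P(negative test | no condition)

# P(positive test) via the law of total probability
p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)

# Bayes' rule: probability of the condition given a positive test
posterior = (sensitivity * base_rate) / p_positive

print(f"Population frequency:                 {base_rate:.0%}")
print(f"This patient, after a positive test:  {posterior:.0%}")
```

With these assumed numbers the posterior is roughly 49%: the population frequency is only the starting point, and the individual evidence moves the forward-looking probability well away from it.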
How to Best Think About It
At PFR, we teach a decision-first process:
Frame with probability: Use probability as your reasoning map. Ask, given what I know (past), how plausible is this outcome (future)?
Inform with frequency: Let real-world data refine your map. Observed outcomes provide feedback, not the whole story.
Respect their difference: Probability is epistemic — about what we know and do not know. Frequency is empirical — about what we can observe, but subject to the Four Nevers.
Use a consistent, repeatable decision process: We arm our students and clients with Definitive Choice or similar choice architecture tools. This helps build their decision system.
Maintain a belief updating mindset: Because of the 4 Nevers, at any point in time, our knowledge is incomplete. Treat beliefs the way cognitive researcher and "superforecaster" Philip Tetlock does: "...beliefs are hypotheses to be tested, not treasures to be guarded."
As with a map and terrain, both are indispensable. The map without the territory is speculation; the territory without the map is noise. The challenge today is that advanced observation tools like AI make the past-focused terrain incredibly vivid and detailed, tempting us to let this data overshadow our judgment. The availability of highly detailed, past-focused data makes it easy to mistake hubris for well-grounded confidence. This precision about what was ultimately blinds us to the great incompleteness of our knowledge for predicting what will be, a failure explicitly demonstrated by the Four Nevers.
Bayesian inference is the best approach for converting past frequencies into future probabilities. For a deeper dive, please check out:
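As a minimal sketch of what such updating looks like, assuming a Beta-Binomial model (a standard textbook choice, not one prescribed here), a prior belief and the historical record from the weather example combine by simple addition:

```python
# Beta-Binomial updating: treat the past frequency as evidence, not
# as the future probability itself. The prior is illustrative.

# Start with a weak prior belief that rain on a "40% day" is about 40%:
# Beta(alpha=4, beta=6) has mean 4/10 = 0.40.
alpha, beta = 4.0, 6.0

# Observed historical record: it rained on 38 of 100 such days.
rainy_days, dry_days = 38, 62

# For a Beta prior with Binomial data, the Bayesian update is addition.
alpha_post = alpha + rainy_days
beta_post = beta + dry_days

prior_mean = alpha / (alpha + beta)
posterior_mean = alpha_post / (alpha_post + beta_post)

print(f"Prior probability of rain:     {prior_mean:.3f}")
print(f"Posterior probability of rain: {posterior_mean:.3f}")
# The posterior (42/110, about 0.382) blends belief and frequency; more
# data pulls it toward the record without ever making it certain.
```

The design choice matters: the past frequency refines the forward-looking probability rather than replacing it, which is exactly the map-versus-territory discipline described above.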
Takeaway: Probability guides future expectations. Frequency records past outcomes. The Four Nevers suggest they are never the same. Confusing them blinds us. Keeping them distinct helps us make decisions using past records, adapting to uncertainty, honoring dispersed knowledge, and, in the spirit of Hayek, letting reality guide our choices without pretending we can centrally control it.
GenAI: The Precision Trap
Generative AI (GenAI) is the ultimate engine of Frequency and Precision—it processes and synthesizes massive amounts of historical data (the territory) with remarkable consistency. However, this power dramatically amplifies the Four Nevers, intensifying the risk of confusing the map with the territory.
Never Complete (Rumsfeld's Never): GenAI's fluency in generating plausible content based on its training data creates a seductive illusion of comprehensive knowledge, masking the crucial Unknown Unknowns it cannot access.
Never Static (Goodhart's Never): By accelerating the dissemination of information and insights, GenAI speeds up the human feedback loop, causing "knowledge" to become obsolete faster, accelerating the self-defeating nature of measurement as a target.
Never Centralized (Hayek's Never): GenAI excels at aggregating explicit data, reinforcing the Pretence of Knowledge and making it easier to overlook the vital, decentralized, and tacit insights (e.g., local market knowledge) held across the system.
Never Invariant (Kahneman's Never): GenAI outputs are highly sensitive to "prompt engineering" and framing. This mirrors human bias, where a query can yield wildly different results based on subtle shifts in context, making the AI's "knowledge" inconsistent and subject to the user's psychological anchoring.
When GenAI processes data that is inevitably incomplete, dynamic, and decentralized, it remains oblivious to what is missing. The resulting output is consistently generated ("precise") but strategically flawed ("inaccurate").
The optimal partnership requires a clear division of labor: GenAI supplies the precision; the human partner supplies the accuracy. We must use GenAI outputs as detailed observations of the past (Frequency) to inform our strategic reasoning about the future (Probability). By managing the Four Nevers, the human decision-maker ensures that the map we build leads to success, rather than simply offering a detailed, yet incomplete, view of where we have been.
For a deeper dive into managing this critical partnership—and why confusing the machine’s precision with the human’s responsibility for accuracy is the ultimate mistake—check out the related articles:
Resources for the Curious
Bastiat, Frédéric. That Which Is Seen, and That Which Is Not Seen. Originally published 1850.
Goodhart, Charles A.E. "Problems of Monetary Management: The UK Experience." In Papers in Monetary Economics, Volume I. Reserve Bank of Australia, 1975.
Hayek, F. A. “The Pretence of Knowledge.” Nobel Prize Lecture, 1974.
Heisenberg, Werner. Physics and Philosophy: The Revolution in Modern Science. Harper & Brothers, 1958.
Hulett, Jeff. Making Choices, Making Money. Personal Finance Reimagined, 2022.
Jaynes, E. T. Probability Theory: The Logic of Science. Cambridge University Press, 2003.
Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
Rumsfeld, Donald. Department of Defense News Briefing. February 12, 2002.
Smith, Adam. An Inquiry into the Nature and Causes of the Wealth of Nations. Originally published 1776.
Tetlock, Philip E. and Dan Gardner. Superforecasting: The Art and Science of Prediction. Crown, 2015.