Designing Humanity: What the Eugenics Era Warns Us About Building the GenAI Future
- Jeff Hulett

- Oct 7
- 5 min read
Updated: Oct 10

The Peril of “Scientism” — When Science Pretends to Know What It Cannot
Friedrich A. Hayek once warned us: “the curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” This humility toward knowledge was absent among the architects of eugenics. In the late 19th and early 20th centuries, scientists and policymakers, buoyed by misplaced faith in measurement and progress, convinced themselves that human worth could be quantified, ranked, and improved through selective breeding and sterilization.
This was scientism, not science: the illusion that human behavior could be engineered by central planners with sufficient data and authority. It was, as Hayek described in his 1974 Nobel lecture The Pretence of Knowledge, the moment when science “becomes an instrument of tyranny,” detached from humility and moral constraint.
The consequences were catastrophic. Eugenics policies, justified as modern and rational, led to forced sterilizations in American hospitals, racial hierarchies in British universities, and ultimately the mechanized horror of the Nazi death camps. Each step was wrapped in the language of statistics and progress.
“In machine learning, theory is optional; data is destiny.”
— Common data-science maxim, inspired by Big Data pioneers
Today, the same tension reemerges in the GenAI era. Artificial intelligence, like eugenics before it, offers dazzling promise—the ability to predict, optimize, and personalize at scale. Yet beneath the promise lies a familiar danger—the temptation to see human complexity as a problem to solve instead of a mystery to understand. When algorithms begin to define fairness, worth, or truth, we risk repeating the eugenic fallacy in digital form—outsourcing moral judgment to data and design.
Yet, over time, society learns. Science corrects itself—but only when humanity reclaims its moral compass. This timeline follows the long arc—from Francis Galton’s early ambitions at University College London (UCL), through Virginia’s infamous Buck v. Bell decision, to today’s era of AI ethics and institutional reckoning. It shows how knowledge can mislead when unanchored from virtue, and how, even across centuries, the pursuit of truth bends—slowly but steadily—toward moral accuracy.
Timeline: The Long Shadow of Eugenics – From Scientific Prestige to Moral Reckoning (1884–2021)
The previous timeline should scare you. The horrifying reality is that it took more than a century, and millions of lives, before society fully recognized how academia and governments enabled eugenics in the name of science. It reveals how moral blindness can hide behind the banner of innovation, and how long it can take for society to confront the harm done in the pursuit of “progress.”
Eugenics reminds us that knowledge without humility can betray its purpose. What began as a quest for human improvement devolved into oppression because science mistook data for wisdom. Today, as we enter the GenAI era, the parallel is striking. Artificial intelligence, like eugenics once did, dazzles as a shiny new toy, offering the promise and the illusion of precision, optimization, and prediction. Yet beneath the sleek interfaces and rapid learning cycles, its statistical DNA remains largely the same.
The foundations of modern AI still rest on the correlational logic developed by Galton, Pearson, and Fisher—methods built to measure association, not causation. While speed, scale, and language fluency have advanced dramatically, AI systems, at their core, attempt to predict the future by extrapolating the past. They can only “know what they know,” which means their predictions will often fail in the face of the dynamic, unknowable realities of human life. As scholars like Judea Pearl remind us, causation remains the unsolved problem of artificial intelligence. In today’s hyperconnected world—where feedback loops link human behavior, markets, and culture—the limits of prediction are even more profound, underscoring the need for human oversight, ethical forbearance, and humility.
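To make that limitation concrete, here is a minimal sketch in Python with NumPy (the variables and coefficients are illustrative assumptions, not anything from this article). A hidden common cause drives two measurements, so they correlate almost perfectly; yet intervening on one, as any policy or product decision would, leaves the other untouched. A purely correlational model would confidently recommend pulling a lever that does nothing.

```python
# A minimal, illustrative sketch: a hidden confounder Z drives both X and Y,
# so X predicts Y well in the observed data even though X has no causal
# effect on Y at all.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hidden common cause: Z -> X and Z -> Y; X does NOT cause Y.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(scale=0.5, size=n)
y = 3.0 * z + rng.normal(scale=0.5, size=n)

# Association looks impressive...
print(f"corr(X, Y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # ~0.96

# ...but "intervening" on X (setting it independently of Z, the way a
# policy or product change would) leaves Y completely untouched.
x_intervened = rng.normal(scale=np.std(x), size=n)
print(f"corr(do(X), Y) = {np.corrcoef(x_intervened, y)[0, 1]:.2f}")  # ~0.00
```

This is Galton’s and Pearson’s toolkit working exactly as designed: it measures association faithfully and says nothing about what happens when we act.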
In the next graphic, what "causes" the pickup truck to move? Is it 1) the dunce on the pickup, 2) the person behind the pickup, or 3) gravity?
(The answer: gravity.)
Then, notice how the pickup truck data can be distorted by the perspective of those who own the data. The camera is turned 45 degrees to make the pickup appear to be on flat ground. Yikes!
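Here is a toy sketch of the camera trick (Python/NumPy; my own illustrative construction, not from the article): tilt the measurement frame by the same angle as the hill, and a 45-degree slope reads as perfectly flat. Whoever chooses the coordinate frame chooses the story the data tells.

```python
# Rotating the "camera" frame by the hill's own angle makes a 45-degree
# slope appear flat, even though gravity still acts straight down.
import numpy as np

theta = np.deg2rad(45)  # tilt of the hill, and of the dishonest camera

# Two points on the hill in world coordinates (x: horizontal, y: vertical).
hill = np.array([[0.0, 0.0],
                 [1.0, 1.0]])  # rises 1 unit per unit forward: a 45° slope

# Rotation matrix that tilts the camera to match the hill.
rot = np.array([[np.cos(theta),  np.sin(theta)],
                [-np.sin(theta), np.cos(theta)]])

camera_view = hill @ rot.T  # the same points, as the tilted camera sees them

slope_world = (hill[1, 1] - hill[0, 1]) / (hill[1, 0] - hill[0, 0])
slope_camera = (camera_view[1, 1] - camera_view[0, 1]) / (camera_view[1, 0] - camera_view[0, 0])
print(f"slope in world frame:  {slope_world:.2f}")   # 1.00 (a 45-degree climb)
print(f"slope in camera frame: {slope_camera:.2f}")  # 0.00 (looks flat)
```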

AI makes it easier to get predictions wrong: errors of omission and confirmation bias are the juice that keeps flawed conclusions flowing.
This is not an indictment of progress but a call for perspective. Artificial intelligence holds extraordinary potential: to expand human creativity, accelerate discovery, and serve as a powerful tool for social good. Yet, like any transformative technology, it carries an equally great capacity for harm when used without humility or restraint. Without moral awareness, AI risks repeating eugenics’ central delusion: the belief that complex human systems can be engineered from above through data and design. UCL’s apology and Virginia’s reckoning show how slowly moral clarity follows scientific discovery. The challenge before us is not merely to innovate, but to use intelligence, human and artificial, with wisdom, humility, and individual responsibility.
True progress will not come from the power of our algorithms but from the wisdom to question what they measure, whom they serve, and when to say no. The difference between science and scientism has never mattered more.

