How AI Made the Future Unthinkable

In June of last year, Leopold Aschenbrenner, a 24-year-old former OpenAI employee, published Situational Awareness: The Decade Ahead, an urgent manifesto about artificial intelligence. “You can see the future first in San Francisco. Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans,” he wrote. “The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word.” What does that mean? Leave that question to your wildest dreams or nightmares, which Aschenbrenner summoned ably: total liberation for humanity or total repression or extinction.

The road to AGI, or artificial general intelligence, Aschenbrenner argued, has already been paved. AI-development “trend lines” have been accurate and will hold into the future; we’re “running out of benchmarks,” and soon “agent”-like helpers will be able to do the work of an AI engineer, speeding development of the technology past “human level” to “vastly superhuman.” First come $10 trillion companies, then a great-power conflict. “If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war,” he wrote. “But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness … Let me tell you what we see.”

Less than a year later, Aschenbrenner has more company, and the AGI brand is strong. AI executives, some of whom have been talking in similar terms for years, are getting bolder. “We are now confident we know how to build AGI as we have traditionally understood it,” wrote Sam Altman two months ago. Anthropic’s Dario Amodei, who prefers the term powerful AI and the image of a “country of geniuses in a data center,” suggested such a thing is attainable by 2026, “though there are also ways it could take much longer.” His own manifesto is slightly more hopeful, suggesting the technology could “structurally advance human health and alleviate poverty” but cautioning that he sees “no strong reason to believe AI will preferentially or structurally advance democracy and peace.” (For a dose of purely optimistic AGI-ism, here’s Ray Kurzweil last year: “We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness … In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through aging we get back from scientific progress.”)

Outside the labs, the population of people who may consider themselves situationally aware has expanded, albeit unevenly, as new research and products have reached the public. At the New York Times, Ezra Klein framed a recent conversation with a former Biden administration AI adviser thus: “The Government Knows A.G.I. Is Coming.” “For the last couple of months, I have had this strange experience: Person after person — from artificial intelligence labs, from government — has been coming to me saying: It’s really about to happen,” Klein wrote. A week later, columnist Kevin Roose echoed Aschenbrenner’s premise even more directly:

In San Francisco, where I’m based, the idea of A.G.I. isn’t fringe or exotic. People here talk about “feeling the A.G.I.,” and building smarter-than-human A.I. systems has become the explicit goal of some of Silicon Valley’s biggest companies. Every week, I meet engineers and entrepreneurs working on A.I. who tell me that change — big change, world-shaking change, the kind of transformation we’ve never seen before — is just around the corner.

Like Roose and Klein, I suspect that quite a few people with strongly held opinions about AI would be surprised to see what the latest LLMs can do today and that more such surprises will arrive, some shocking and destabilizing, in the workplace and beyond. I’m also open to the claim that, while instructive in many ways, early experiences of broken chatbots and absurd AI answers and summaries, not to mention the staggering proliferation of AI spam and slop, offer easy ways to write off technologies that are improving weekly. But I am not quite “feeling the AGI,” at least in the ways outlined above. My own resistance is easy enough to explain and, likewise, easy enough for a reasonable person to disregard. I don’t and couldn’t work in an AI lab as a research scientist; rather, I pass my days in an exposed and shrinking category of tech-adjacent knowledge work that demands — and instills! — heavy skepticism of the tech industry’s claims about itself. I’m also a lifelong sci-fi fan who grew up reading and rereading dazzling Vernor Vinge books about the singularity, wrung a thesis out of representations of machine consciousness in literature (we’ve been worried about it for a while), and allocated many hours of my one and only life to metabolizing arguments about superintelligence on the rationalist forum and longtime AI hub LessWrong, albeit in less heady times. Years later, I interviewed Kurzweil about his strange and short-lived e-book start-up while he took down handfuls of longevity supplements; now, Mr. Singularity is back in the spotlight, wondering if his time has come. In other words, I’ve been sitting with the AGI story for a while, and I’m probably unusually inclined to see it in narrative terms: as the description of a dream, a nightmare, or an ambition, rather than as a useful framework for understanding even incredibly powerful new tools.

Lost in AGI dread and gleeful anticipation is the fact that, while the technologies driving the discussion around AGI are genuinely novel, the stories we’re hearing about them are not. The AGI story today’s elites embrace is a retelling of the AI story embraced by the elites of yesteryear and of the automation story of their predecessors. It’s fair to say we’re in uncharted technological territory, but that is also a truism. Where else would we be? As workers, politicians, regulators, industrialists, or members of the curious general public, we have all previously confronted technological breakthroughs and moments of political and economic uncertainty. All those moments created opportunities for narrative invention, too. In its more cynical deployments, AGI, like AI before it, is a marketing story masquerading as a descriptive term; it’s a bid to rebrand — and narrow down and direct — our collective idea of the future.

Maybe this doesn’t matter, this time is completely different, and the end point of decades — centuries — of predictions about machine personification and eventual domination is finally at hand. History can’t tell us otherwise, and technological change really is coming (though, again, when isn’t it?). But it can make some helpful suggestions, and when it comes to stories like AGI, it has a few.

In his 2022 book Labor’s End, historian Jason Resnikoff re-creates a period in intellectual history that may feel, or at least sound, familiar:

“Hardly anybody is against automation,” wrote John Diebold. “As a matter of fact, nearly everybody is for it, because it is a word that implies progress” … A Ford manager credited with having invented the word called “automation” a “new concept — a new philosophy of manufacturing.” According to James Boggs, a Marxist Humanist and Black liberation theorist who worked on an automobile assembly line in Detroit for 20 years, “Automation is the greatest revolution that has taken place in human society since men stopped hunting and fishing and started to grow their own food,” and this required a “New Declaration of Human Rights” in which people were liberated from work. 

Daniel Bell, a sociologist and public intellectual, wrote … that industrial work “has lost its rationale in the capitalist industrial order.” The union leader Walter Reuther told Congress, “We believe that we are really standing on the threshold of a completely revolutionary change in the scientific and technological developments we have experienced … we believe that we are achieving the technology, the tools of production, that will enable us as free people to master our physical environment.”

Pioneering computer scientist Norbert Wiener, Resnikoff notes, foresaw a “second industrial revolution” that would “undoubtedly lead to the factory without employees,” while Hannah Arendt wrote of “the advent of automation, which in a few decades probably will empty the factories and liberate mankind from its oldest and most natural burden, the burden of laboring and the bondage of necessity.” In 1960, JFK declared the prospect of mass automation a matter of urgent national importance, to be considered carefully lest it become a “curse.”

This broad and rapid engagement with the new concept of automation was a response to both unsteady economic conditions and new and fast-moving developments in industrial processes, many accelerated by the war. It also corresponded with rising awareness of computers, which were suddenly capable of extraordinary, unintuitive, and narrowly superhuman things. GE, for example, told customers its data-processing systems could handle a wide range of sorting and bookkeeping tasks, “leaving the relatively small number of unusual operations to off-line human intervention where economics so dictate.” A 1955 IBM pitch both enticed and frightened white-collar management by invoking robotized assembly lines: “Factory automation, growing lustily, requires a corollary increase in office automation.” These companies weren’t just marketing new products; they were describing forces beyond their control and offering a way to side with them, or at least to buy insurance against them. (1955 was also the year that overlapping fields of study exploring frontier computing capabilities resulted in a new and durable coinage: artificial intelligence.)

As a story — and a brand — automation was simultaneously vague, expansive, and incredibly powerful. In 1958, after the country briefly fell into recession, The Nation suggested that the economic situation might soon be not just dire but unrecognizable and that urgent attention must be paid: “We are stumbling blindly into the automation era.” In “The Triple Revolution,” an open memorandum addressed to the president in 1964, a wide range of prominent signatories including technologists and activists warned of imminent “cybernation,” a term adopted and briefly popularized by Martin Luther King Jr. that demanded a complete rethinking of the economy and the government’s role in providing for citizens:

The cybernation revolution has been brought about by the combination of the computer and the automated self-regulating machine. This results in a system of almost unlimited productive capacity which requires progressively less human labor. Cybernation is already reorganizing the economic and social system to meet its own needs.

These sorts of proclamations are, in 2025, commonplace — replace automation with AI, AGI, or OpenAI’s Super Bowl ad for “the Intelligence Age.” Like automation, these are terms adopted and marketed not for their precision but for their novelty and vagueness. Automation was a new “mechanization” stripped of the drearier aspects of the industrial revolution’s legacy (very human despair, labor’s declining share of income) and re-enchanted with a sense of gravity, mystery, and potential (whirring machines in empty rooms). Mostly, it’s familiar as a discourse of inevitability. Automation represented, Resnikoff told me, a claim that “that future isn’t a political question — it’s just the unrolling of technological development in itself.”

In the 1950s, lawmakers alarmed by “ever new and startling developments in automation” reported in “not only the trade magazines but the mass-circulation popular magazines” held hearings on the subject. The definition of automation, Resnikoff writes, was a “consistent concern” that haunted and ultimately helped neutralize the project. The panel determined that “the economic significance of the automation movement is not to be judged or limited by the precision of its definition” and that, in any case, they were “clearly on the threshold of an industrial age.” Awed by the future yet unable to agree on how to describe it, the committee concluded that “no specific broad-gauge economic legislation appears to be called for,” appealing to “enlightened management” to mitigate potential displacement and harms and warning labor leaders against “a blind defense of the status quo.”

After much deliberation, in other words, the imminent remaking of the economy, and humanity’s place in the world, was reduced to an awareness campaign. Broad-based “automation” was just a matter of time. It wasn’t the government’s place to tell businesses how to handle it, and it wasn’t the businesses’ place to do anything but enable it to its maximum potential, just … carefully. Automation framed the future in terms that made asking for things in the present — marginally better terms for workers, for example — sound like a waste of energy. It was, Resnikoff suggests, an argument for abandoning work, and the workplace, as a contestable, organizable, political space. Why bother? The end of work is nigh.

Fast-forward to 2025, when widely covered and debated government committees have largely been supplanted by a novel technology called “podcasts,” and we encounter the Times’ Klein interrogating former Biden AI adviser Ben Buchanan along similar lines:

This gets to something I find frustrating in the policy conversation about AI.

You start the conversation about how the most transformative technology — perhaps in human history — is landing in a two- to three-year time frame. And you say: Wow, that seems like a really big deal. What should we do?

Klein continues, “Is this policy debate about anything? If it’s so fucking big but nobody can quite explain what it is we need to do or talk about — except for maybe export chip controls — are we just not thinking creatively enough?” Buchanan, who describes a reluctant and slow process of engagement with AI by the Biden administration, responds, “I think there should be an intellectual humility here. Before you take a policy action, you have to have some understanding of what it is you’re doing and why.” That they find themselves at the same impasse as a congressional subcommittee from the Eisenhower administration is a little bit funny, but it’s also important. They’re describing — or at least responding to — the same sort of story in the same sort of way: an imminent and unstoppable big deal that changes everything and about which nothing can realistically be done.

AGI, like G-less AI, automation, and even mechanization, is indeed a story, but it’s also a sequel: This time, the technology isn’t just inconceivable and inevitable; it’s anthropomorphized and given a will of its own. If mechanization conjured images of factories, automation conjured images of factories without people, and AI conjured humanoid machine assistants, AGI and ASI conjure an economy, and a wider world, in which humans are either made limitlessly rich and powerful by superhuman machines or dominated and subjugated (or perhaps even killed) by them (Industrial Revolution 3: The Robot Awakens). In imagining centralized machine authoritarianism in the future, AGI creates a sort of authoritarian, exclusionary discourse now. A narrative emerges in which the decisions of AGI stakeholders — AI firms, their investors, and maybe a few government leaders — are all that matter. The rest of us inhabit the roles of subject and audience but not author.

Even in its more optimistic usage, the term AGI still functions as a rhetorical black hole, ultimately consuming any larger argument into which it is incorporated with its overpowering premises: It’s coming; there’s a before and after; it’ll change everything; there’s nothing we can do about it; maybe, actually, it’ll be so smart that problem-solving will no longer be our problem. (This perhaps explains why interventions like Aschenbrenner’s, and their counterparts in media and elsewhere online, tend to skip ahead to final-battle geopolitical war-gaming with China for control over the technology — at least it’s something to talk about. If AGI is an enthusiastic exercise in sci-fi world-building, war is the afterthought of a plot.) Aschenbrenner concluded his manifesto with a tellingly claustrophobic reformulation of Pascal’s Wager, the philosopher’s 17th-century argument that you may as well believe in God: “At this point, you may think that I and all the other SF-folk are totally crazy. But consider, just for a moment: What if they’re right?” (In the spirit of taking things very seriously, one answer to this question would be “The government should seize Leopold Aschenbrenner’s superintelligence investment fund, yesterday.”)

These formulations are very much a product of a broader political context. In the 1950s and early ’60s, it was plausible that a wide range of parties with different interests — industrialists, mainstream politicians, labor leaders, leftist theorists — might conjure futures from automation that were, if not utopian, not completely hopeless. At the very least, the business leaders who had seized the mantle of technological progress from the government were careful in how they talked about jobs. Private expressions of unease about labor and unemployment were in part what inspired Kurt Vonnegut to leave his public-relations job at GE to write his automation novel, Player Piano, but so were advanced machining technologies Vonnegut witnessed with his own eyes.

Today’s AI leaders, by contrast, have embraced the apocalypse as a marketing strategy, gesturing at future UBI while they wonder aloud if they’re bringing about the end of humanity, then the next week acting as though they hadn’t. In the process, they have successfully mainstreamed esoteric discourses about AGI and ASI while sidelining contemporary criticism of actual AI deployment. It’s not just about raising capital, a purpose for which AGI is an impossibly intoxicating (and, again, familiar) story. Within discussions about AI, what’s the point of getting bogged down in disputes over copyright or biased training data or even the quality and performance of currently deployed AI systems when everything is changing at an exponential rate? Beyond that, in the workplace, what’s the point of organizing or planning for the future at all? Beyond the workplace, with mindlike software just a few vivid dreams away from waking up, what’s the point of doing, well, anything? It’s fair to lament the rapid polarization of AI coverage and public opinion. I’d also suggest that the constant, knowing invocation of underexplained, variously defined, but obviously unstoppable “AGI” has a lot to do with it.

As I mentioned several years ago, it increasingly appears that humanity is a biological bootloader for digital superintelligence

— Elon Musk (@elonmusk) April 2, 2025

It could also help explain why some discourses downstream from AGI are even bleaker than their predecessors. At the beginning of the American Century, technological determinism could take the form of inevitable automation that might change everything, might take everyone’s jobs, and might make everyone’s lives easier. (Harnessable machines with narrowly superhuman abilities to move or even compute.) Now, near the end, when Americans are as despondent as they’ve ever been about solving hard collective problems, it takes the form of a robot that can improve itself until it regards its creators in the way its creators regard bugs (unharnessable machines with superhuman abilities full stop). A recent Pew survey found wide gaps between the attitudes of AI experts and the skeptical general public, which is worried above all about the degradation of work. Fixation on imminent AGI by the nominally more optimistic experts doesn’t cultivate understanding or highlight a way forward. It feeds despair. It skips over countless decisions and fights about how technology is applied and deployed.

You’re going to start seeing a lot more of this until Big Tech gets the message. pic.twitter.com/HTKSomwHYd

— Reid Southen (@Rahll) December 17, 2024

It has also become increasingly clear that AGI discourse, like automation discourses before it, is fundamentally a set of conversations between people who in some sense believe or want to believe they’re in charge and should decide what’s best for everyone else. Complicating matters is the fact that they perceive a threat to that status from AGI, which promises to be the new smartest guy in the room. This both limits the scope, utility, and reach of discussions about the underlying technology and helps explain just how panicked, strange, and disorienting they have become: In these versions of the story, it’s not just factory workers or office drones but the self-appointed cognitive elites who lose their place in the world too.

Those predicting dramatic short-term economic impacts of AI seem to have the impression that because AI is so powerful, disruptive, and general purpose, it will be adopted rapidly. But wait a second. The more advanced, general-purpose and disruptive a technology is, the more…

— Arvind Narayanan (@random_walker) February 20, 2025

I don’t mean to suggest we should seek too much comfort in historical worries about technology and change. Many specific claims embedded in the automation discourse were, in hindsight, obvious and correct. It’s true that many Americans are today employed in jobs that didn’t exist when automation was a new term, and that the rapid adoption of computers far more advanced than most could conceive of when the term was new — computers that have far exceeded early definitions of artificial intelligence — didn’t coincide with total societal breakdown but rather with single-digit unemployment rates. It’s also true that this process, which was technological but also inescapably political, cultural, and meaningfully contested at every stage, resulted in job displacement, reskilling and deskilling, regional success stories, and regional apocalypses.

What’s most important about the discourse around automation is that the one thing its diverse participants seemed to agree on — that the country was at the cusp of a sudden, nonnegotiable step change — was also the one thing about which they were furthest from correct. The future turned out to be more complicated than an upgrade cycle: American employment in manufacturing increased for the next 25 years and held relatively steady for 20 more; now, however, American automotive-manufacturing jobs, to choose one example, are diminished in both quantity and quality (they’re also sitting, albeit precariously, near a three-decade high). Today, manufacturing growth is nonunion; meanwhile, something close to true “lights-out” car factories are coming online in China as cars themselves turn into consumer electronics. The economy is larger and more productive but also more unequal.

America’s turn away from manufacturing, and what came after, was in part a technological story — by the ’90s, robots were actually eliminating auto-manufacturing jobs. It was also a story about trade, macroeconomic shocks, a massive service-sector boom, organized labor, and politics (e.g., your 401(k) this week). It was above all a story about thousands of decisions, choices, and fights, not fate. For Ford and companies like it, automation didn’t function as a prediction or a description so much as a statement of intention. Now, AGI is a repetition and an escalation. First comes the lights-out factory, then the lights-out office, then, finally, the lights-out mind. Any questions?

AI doomers: Can you imagine an advanced technology falling into the wrong hands, which would allow China to wreck our economy, throw markets into turmoil, interrupt manufacturing progress, disrupt our scientific infrastructure, and create global panic and confusion?

Me: Yes. pic.twitter.com/1EJ4RjjqDZ

— Derek Thompson (@DKThomp) April 9, 2025

As Tressie McMillan Cottom writes, stories like this aren’t “held to account for being accurate, only for being compelling.” AGI is a suspiciously compelling story with the additional disadvantage of shrouding its purported subject in mystery. Within the AI community, some have raised concerns about the focus on AGI. In February, a group of researchers released a paper warning against using AGI as “the north-star goal of AI research,” arguing that it represents a series of “traps” and “undermines our ability to choose effective goals.” In Science, another group of researchers has argued that the focus on AGI has led to a deeper misunderstanding of the actual nature and transformative potential of LLM-based tools, which should be understood not as “general-purpose technologies,” like the steam engine or electricity, but as “cultural and social technology” humans can use to organize, access, and manipulate vast quantities of information, akin to markets, state bureaucracies, or even the printed word (or, more modestly, technologies such as online search).

None of these is a dismissive evaluation of technological change. (To paraphrase Paul Ford, describing LLM-based AI as software, or understanding the rush of strange and versatile new tools as the results of “vector databases queried with tokens,” isn’t a way to dismiss them. It’s just a different way to say they’re incredible.) They simply insist that our current and future situations are necessarily knowable and negotiable, not simply imminent or imminently inscrutable.

Google shared a useful contribution this week in the form of an earnest, exhaustive report attempting to identify AI risks and mitigation strategies that takes and defends AGI as its core premise. It’s a thorough and fascinating document about responsible technological development and deployment. It also invites a helpful exercise: In many of the specific instances where the term AGI appears, automation would suffice, of course, but so would generic “powerful software that could be used to hurt people” or, surprisingly often and with little loss of meaning, technology or just progress, despite the report’s detailed explorations of AGI in the novel terms of alignment, misuse, bias, and harms.

We’re far enough along in the AGI discourse that, even for some of the people who have most effectively promulgated it, it’s losing its utility. Departed OpenAI co-founder Andrej Karpathy, no stranger to “feeling the AGI,” recently argued against a core tendency of AGI-ism, pointing at the actual diffusion of LLM-based software so far, describing (and allowing for) a situation in which popular AI tools produce “disproportionate benefit for regular people, while their impact is a lot more muted and lagging in corporations and governments.” In contrast to sci-fi narratives in which AI is a “top-secret government megabrain project wielded by the generals,” he instead describes “versatile but also shallow and fallible” tools like ChatGPT “appearing basically overnight and for free on a device already in everyone’s pocket.” For others, the brand is simply becoming exhausted and dissolving into incoherence. In a recent interview with Stratechery, Altman described AGI as “a term that I think has become almost completely devalued,” outlining a wide range of definitions. Asked how he thinks LLM-based tools will evolve, he suggests that OpenAI is “on the path” to software that can “truly create new things” but also that, actually, OpenAI has “overdelivered” on its promises and “people have just updated” — they’re already taking for granted capabilities that just a few years ago would have inspired awe and perhaps even a “world meltdown.” Altman says his instinct is that AI “just seeps through the economy and mostly kind of like eats things little by little and then faster and faster.” After all, he says, “that’s kind of standard for other tech evolutions.”

These are the words of someone running a fast-growing consumer-technology company with discernible goals, needs, and competitors, not someone worried or excited that he’s about to create God. The same week Altman gave this interview, OpenAI joined other tech companies in submitting its proposals for the White House Office of Science and Technology Policy’s AI Action Plan. Here’s how it framed the state of play:

As America’s world-leading AI sector approaches artificial general intelligence (AGI), with a Chinese Communist Party (CCP) determined to overtake us by 2030, the Trump Administration’s new AI Action Plan can ensure that American-led AI built on democratic principles continues to prevail over CCP-built autocratic, authoritarian AI.

There it is, back again! The fast-approaching threshold, the looming inevitability, the rapture, the storm. In conversation with a curious, critical interviewer, Altman dismisses AGI as a fuzzy story. When it’s time to stake a claim, back up a valuation, and ask for something from authorities — in this case, freedom from the copyright regime for training data — it’s an emergency. That, for OpenAI, is the value of “AGI.”

The AGI brand campaign has other uses with extraordinarily high stakes in the present and beyond the tech industry. DOGE is, above all, the manifestation of a right-wing dream to cripple the state and to commandeer what’s left. To the extent that it looks forward, however, it’s also a Musk-inflected AGI story put into action with real and current consequences. Who cares if we break the bureaucracy? Soon enough — inevitably, in fact — AI will be able to handle the messy work of governance and administration better than we can. Why isolate American industry from the rest of the world with massive tariffs? Because AGI robots will magically pick up the slack. Need to justify drastic cuts to Medicare or Medicaid? Just say “AI” and get out that bone saw. It’s a matter of time until all bets are off and the concerns of regular people will be well and truly irrelevant, their relative intelligence and intrinsic value reduced to a quantity approaching zero. Why shouldn’t the people in charge act like it’s already true? And why should any of us bother to do anything about it?