Karen Hao’s Empire of AI: a liberal critique of capitalist technological progress

EMPIRE OF AI: Dreams and Nightmares in Sam Altman’s OpenAI

Karen Hao

Penguin Press, 2025

482 pages


Karen Hao’s Empire of AI is a deep dive into the rise of OpenAI, the organization that released ChatGPT, which quickly became the world’s most popular Large Language Model (LLM) application.[1] At center stage in Hao’s book is OpenAI CEO Sam Altman, the current darling of Silicon Valley. Her narrative revolves around efforts to oust Altman from OpenAI in November 2023 and his ability to brush off such attempts by turning what was once a non-profit research lab into a standard Silicon Valley tech corporation.

Empire of AI paints a detailed yet engaging portrait of the AI industry, one that shows how a combination of scientific talent, ideological zeal, ruthless business practices and an abundance of good fortune helped the company morph into the innovation poster boy it is today. It includes interviews with workers in historically oppressed countries who perform much of the critical work that enables AI to function. Its coverage of the environmental impact of AI, and of the efforts of major firms like Google to cover up environmental abuses, is valuable.[2]

It also features much material about Altman himself. Hao recounts many junctures in the evolution of OpenAI in which he used personal charm to seduce scientists and investors before an unexpected decision left them feeling used and betrayed. Empire of AI describes the many falling-outs Altman has had with ex-OpenAI employees and investors such as Elon Musk, OpenAI’s ex-lead scientist Ilya Sutskever, and Dario Amodei, who would later found rival AI firm Anthropic.

Hao lays out the web of relationships that enabled Altman in the first place. Most notable is Peter Thiel, who has acted as Altman’s mentor for over a decade, imparting a good deal of his techno-libertarian ideology in the process. She argues convincingly that the elitist ideology of a handful of Silicon Valley billionaires is guiding AI development far more than scientific or social concerns. Hao recounts how politicians in Congress are eagerly offering support in the form of massive infrastructure scaling, deregulation and public funding despite warnings from a multitude of academic researchers.

One of the book’s strengths is its clear and concise explanation of technical controversies among AI researchers and how they relate to speculation about AI’s capabilities. Hao does a good job of dismantling utopian claims that AI systems will soon outpace human intelligence in every domain.

All tech products are the result of the shared scientific knowledge of all humanity as well as, in many cases, direct government funding. Invariably, after development they pass into the hands of a narrow slice of billionaires. Hao shows how AI is the culmination of this ongoing privatization process. She explains how generative AI systems like OpenAI’s ChatGPT are trained on huge amounts of effectively stolen data, whether proprietary computer code, e-books or academic articles.

Whereas a heist of this magnitude would lead to criminal consequences for you or me, AI companies enjoy the backing of powerful world governments. Although Hao doesn’t mention it, one cannot help but recall the tragic case of Aaron Swartz, who killed himself on the eve of sentencing as he faced up to 35 years in jail. His crime was pirating academic articles from the JSTOR database - small change compared to OpenAI’s open looting.

Overall, Empire of AI will serve anyone seeking an even-handed and up-to-date treatment of the industry. But it is not without certain weaknesses.

The role of AI in the economy, and the extraordinary hype that has accompanied it every step of the way, are not discussed. There is also little treatment of perhaps the most plainly insidious aspect of AI development: its use in a military context and for mass surveillance.

But perhaps the most significant weakness, politically speaking, is the book’s central argument that some features of the AI industry resemble classic 19th-century imperialism. This ascribes a role to AI and big tech that is utterly exceptional in world-historical terms. As is often the case with radical liberal criticism, the effect is to conflate symptoms with the underlying disease and thus let modern imperialism off the hook.

This may seem like nit-picking of an otherwise strong book. But these points are essential for a Marxist understanding of AI and its politico-economic role.

 

What is AI?

First, a quick aside on AI and how it became such a source of excitement and controversy. The definition is often slippery, but it generally refers to computer algorithms capable of simulating human cognitive abilities. The term “Artificial Intelligence” was first coined in 1956, its fortunes rising and falling with the ups and downs of technological development in the ensuing decades.

Older techniques from the 2000s and 2010s weren’t called AI at the time but probably should be now. For example, Google’s search algorithm or the knowledge graphs behind the logistics networks of companies like Amazon have had a massive impact, but aren’t usually thought of as AI, primarily for economic and cultural reasons rather than scientific ones. These technologies matured before the “thawing” of the last AI winter around 2012. It should be noted that there had already been two “AI winters” since 1956, both setting in because AI had failed to make good on short-term promises of financial returns, leading investors to jump ship. These cycles, highly disruptive to scientific research, are prime examples of how damaging the anarchy of production and research can be in an ultra-financialized capitalist world system, even on its supposed home turf of innovation.

In the 2020s, the new rage is generative AI techniques (Gen AI), the tech behind ChatGPT and other chatbots. These learn from vast troves of data and then generate outputs typically in text, image or audio form. Despite the renaissance of terminology like AI and Artificial General Intelligence (AGI) – more on that later – Gen AI is just the latest link in a long chain of developments in digital technologies in the last few decades.

Gen AI is based on neural nets, with architectures loosely – much more loosely than is often admitted – inspired by the neural architecture of the human brain. While neural methods were popular in the 1980s, they were too computationally expensive for the hardware of the time, and data was also hard to come by. While there were some genuine algorithmic advances in the 2010s that laid the groundwork for modern models, those models have primarily been made possible by massive improvements in computer chip technology and the availability of previously unimaginable amounts of data on the Internet.

LLMs have been successful in capturing the public’s imagination with their capacity for amusing and even shocking interactions with human users. LLMs comfortably pass the Turing Test,[3] once regarded as the premier test of whether a machine can match or exceed human intelligence. However, Turing’s test is based on an overly linguistic view of human intelligence, and passing it does not show AI is effective at reasoning or interacting with the physical world.

"Stochastic Parrots at work"
IceMing & Digit / https://betterimagesofai.org / 

Tech leaders cite Gen AI’s supposed world-shattering novelty and insist that, unlike previous tech innovations and the burst bubbles they precipitated, Gen AI won’t fail to deliver on its promises. The uncritical amplification of these claims has become a modus operandi for tech journalists. Willingly or not, they all seem to have forgotten the precedents for the current hype around AI, not just within the tech world but within the narrower AI field itself. This is not to say there is no novelty with current techniques, but that their capacity to change the world beyond recognition is almost deliriously overblown, leading to unreal stock market valuations and seemingly endless speculation about AI’s impact.

Whether or not AI systems will ever exhibit “true” intelligence, there is little question on a practical level concerning their ability to perform some of the tasks we are trained to do at school or university or in the workplace. However overhyped, there is no doubt that Gen AI tech does have significant economic value. As with earlier episodes like the dotcom bubble of 2000, genuine value was involved, even if not quite as much as tech CEOs and investment bankers believed.

 

AI and the Economy

Empire of AI closes with OpenAI’s announcement of its transformation into a for-profit corporation in late December 2024 – apparently timed for the holiday period to reduce public scrutiny. Its press release declared:

We once again need to raise more capital than we’d imagined. The world is moving to build out a new infrastructure of energy, land use, chips, datacenters, data, AI models, and AI systems for the 21st century economy. We seek to evolve in order to take the next step in our mission, helping to build the AGI economy and ensuring it benefits humanity.[4]

But what does an “AGI economy” mean? Elsewhere, Hao explains how Artificial General Intelligence (AGI) is a notoriously contentious concept.[5] Most of the time, it refers to an AI that is smarter or more capable than a human being. This isn’t very enlightening, since we don’t have reliable metrics for human intelligence, and those we do have are not suitable for measuring AI (IQ tests and exams rest on a series of assumptions about human capabilities that do not carry over to AI).

If AGI means doing a human job at many times the speed and scale, then it could be argued we’ve had super-intelligent AI systems for a long time (the efficiency and scale of Google Search are incomparable to someone searching a database by hand). But if it means replacing human beings by being better at absolutely everything they do, then it is clearly nowhere near fruition. An AI that generates text, images, and computer code may one day replace a computer programmer or an academic researcher (although human oversight will still be required), but it isn’t going to clean a sewer or cook a meal anytime soon.

That AGI is ill-defined does not stop tech CEOs from touting its imminence to the public and lawmakers alike. To prevent China from getting there first, we must deregulate tech and massively fund AGI development – or so the argument goes.

That’s the stick. The carrot is that AGI is going to radically transform the economy. The argument is that AI advancements will make labor so productive that we will no longer have enough jobs, and the vast majority of people will have to find meaning in life through things other than work - even if, for most, it is currently impossible to find meaning in anything else.

Such claims are not unprecedented. In 1930, as the world reeled from the Wall Street Crash, John Maynard Keynes wrote an essay titled “Economic Possibilities for Our Grandchildren”[6] that predicted that, within two generations, the average person would work a 15-hour week. Even during the Industrial Revolution, figures such as John Stuart Mill and David Ricardo warned of mass unemployment as a result of technological innovation. Yet the job base has grown many times over since the 19th century.

Today, the great debate in the AI commentariat is whether AI is so powerful that it will replace most if not all cognitive tasks. Unlike previous technological innovations, this would primarily impact white-collar jobs requiring literacy or specialized knowledge, such as software development.

The broad answer is, yes, AI is that powerful. The ability of LLMs to generate computer code, visuals, audio and text in seconds with small amounts of human input means it can massively increase productivity in jobs which produce content of this kind. Opponents of AI will, rightly, charge that this will lead to a major reduction in quality and to significant safety concerns.[7] However, for better or worse, this did not stop lower-quality mass production superseding artisans during the Industrial Revolution.

While it is clear AI will impact some jobs, accurately measuring just how many is much more difficult. Much current hype is based on anecdotal evidence about major staff reductions. But such claims are hard to judge outside of a few areas in which the impacts are exceptionally well documented (e.g., the mass replacement of customer service handlers with AI voice programs). Otherwise, it’s entirely possible that employers are using AI to justify layoffs that were already in the works and that other workers are being forced to pick up the slack. Rather than innovation, the result is little more than an old-fashioned speed-up.

AI researchers have struggled to come up with more objective measures. Reports from Anthropic [8] and Microsoft [9] base their estimates on the queries users submit to current chatbots. The Microsoft study concludes that jobs like sales representatives and customer service representatives (which employ 1.142 million and 2.859 million people respectively) are directly in the line of fire.

One of the most successful areas of AI application seems to be software development. AI systems like Anthropic’s Claude 4.1 Opus and the recently released GPT-5 are capable of coding entire applications or websites based on natural language descriptions by a user. However, recent evidence[10] suggests that, for more experienced coders, the use of such tools reduces productivity. While the coding ability of these models is impressive for users with little or no coding experience, they can quickly become a hindrance when applied to more specialized problems or to large codebases.

It should also be acknowledged that AI’s ability to replace a job doesn’t mean it will, since such replacement requires knowledge and capital investment. In other words, exposure doesn’t always equal replacement. Economists are often more skeptical. For instance, Nobel Prize-winning economist Daron Acemoglu predicted in 2024 that total productivity gains from AI will not exceed 0.71 percent over the next 10 years,[11] a forecast based, moreover, on rather generous estimates of AI’s capabilities produced by OpenAI researchers in 2023.[12] He expects the impact on jobs to be modest overall, all things being equal.

 

AI and the continued rise of the stock market

A key component of the supposed drive toward Altman’s “AGI economy” is the “scaling hypothesis,” which underpins the “hyperscaling” business strategy. The argument is that all we need to reach AGI is more data, more processing power, and hence more government-backed data centers across the globe.

This has led to speculation about “emergent” AI capabilities, the idea that once a certain quantity of processing takes place, capabilities like reasoning arise without the model being explicitly trained to develop them.

This is a case where a supposedly scientific explanation of how to develop AI dovetails with major corporate efforts to gain monopoly positions over rivals. While early LLMs improved drastically as their training data expanded, there is little evidence that more and more data will lead to substantially better models beyond a certain point. Indeed, a plateau seems to have been reached in mid-2024. Since then, much of the progress has come from improvements in model architecture or data quality (e.g., Chinese firm DeepSeek’s V3 and R1 models, which shocked world markets earlier this year, as well as OpenAI’s recently released GPT-5 model).

Nevertheless, the unquestioned belief in this hypothesis is driving AI companies to build growing numbers of bigger and bigger data centers in order to create bigger and bigger models. The upshot is a “hyperscaling” arms race, leading to the construction of gargantuan data centers across the US, polluting major areas and drying up water supplies.[13] Hao’s own reporting on data centers in northern Chile and Uruguay shows the devastation that this bigger-and-better philosophy has caused.

The scramble for data centers is the primary driver of the huge valuations of chip companies NVIDIA (now the world’s most valuable company, with a market cap of $4.28 trillion) and AMD, which provide the chips used to construct the giant computers housed in these buildings. Between them, they have a near-monopoly on the graphics processing units (GPUs) needed to train and run large AI models.

Traditional big-time players like Meta, Google, Microsoft, and Apple are competing to integrate AI into their products. The aim is to leverage AI systems throughout the economy to improve productivity and reduce labor costs. This promise has become a critical prop for world capitalism in the past three years. According to DataTrek, without the so-called Magnificent Seven stocks comprising the major tech companies and NVIDIA, the S&P 500 would scarcely have grown across 2023 and 2024 (the index actually increased in value by 24.2 and 23.3 percent respectively in those years).[14] According to Paul Kedrosky,[15] half of the US’s economic growth in the last year came from data-center construction.

This has helped drive the stock market to new heights without a clear route to profitability. AI research labs like Anthropic and OpenAI have burnt through cash since the release of their LLMs, with many commentators speculating that the models cost far more to run on current hardware than they bring in through subscription services. Yes, these costs are likely to decline significantly, but OpenAI’s revenue is still fairly limited compared to the ad revenue brought in by Google or Facebook.

This didn’t stop OpenAI from valuing itself at $500 billion in a recent share sale. Its aspirations for a future monopoly on advanced AI products seem unlikely given the speed with which other industry players are able to catch up or even leap ahead. OpenAI’s revenue is estimated at no more than $5 billion, and it continues to make gargantuan losses on all of its service offerings. In a bombastic but informative summary of the crazy numbers involved in the AI industry, Ed Zitron argues:

“What is missing is any real value generation. Again, I tell you, put aside any feelings you may have about generative AI itself, and focus on the actual economic results of this bubble. How much revenue is there? Why is there no profit? Why are there no exits? Why does big tech, which has sunk hundreds of billions of dollars into generative AI, not talk about the revenues they’re making? Why, for three years straight, have we been asked to “just wait and see,” and for how long are we going to have to wait to see it?”[16]

Microsoft, Apple and Google’s investment in data centers and AI research seems to be more about keeping existing users on platforms like Microsoft Office, iOS or the Google Suite than about creating brand new products. Similarly, Musk’s massive investment in AI is aimed at attracting users to X, with the hope it can be transformed into an everything app analogous to WeChat in China. Whoever the eventual winner(s) in this race may be, it is not clear how this will boost profits drastically - except for the eventual ability to jack up prices on consumers for the same services that already exist.

One striking rationale for tech stocks’ continued rise comes from none other than Peter Thiel, who recently conceded that AI is viewed as a last-chance saloon before markets face the reality of general economic stagnation. As he told the New York Times’ Interesting Times podcast, “The fact that we’re only talking about A.I. — I feel that is always an implicit acknowledgment that but for A.I., we are in almost total stagnation.” The economic data supports this point of view.

Following the disappointing release of GPT-5, there are signs the AI bubble may be about to burst. Bloomberg asked “is the AI winter finally upon us”[17] and the Washington Post wondered if the AI industry was “off course.”[18] Perhaps most ominously, on August 18 Altman himself declared that AI was in a bubble: “Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes.”[19] Clouds are certainly forming.

 

Should workers be opposed to AI?

Precise answers about AI’s ability to automate jobs are hard to come by. Luckily, we do not need precise measures to sketch out the important political questions concerning potential job displacement.

AI-fueled job displacement is first and foremost a political question. Like outsourcing production to low-wage economies or investment in machinery during the Industrial Revolution, using AI to increase productivity and lay off workers is a way for companies to increase their profits. As in the 2023 SAG-AFTRA strike, workers in many industries will face a choice between opposing AI and losing their livelihoods.

In the abstract, workers should be in favor of all productivity-raising technologies. Even before AI, Keynes was right to think that his grandchildren’s 15-hour work week would be technologically feasible. AI’s ability to replicate a number of cognitive tasks only makes the irrationality of having people work endless hours more obvious. With the correct utilization of AI and other recent innovations, our collective work and private lives could look very different indeed, starting, not least, with the former being a good deal shorter. In an ideal world, workers fighting layoffs would call less for an end to AI and more for its use and development to be placed in their own hands.

Unfortunately, we do not live in an ideal world, and AI will undoubtedly be used in the near term to assault jobs and working conditions while enhancing the ability of managers to spy on workers.

One critical factor is that even a limited introduction of AI tools will prepare the ground for further replacement. The more workers use these tools, the more data they provide on how to do their job, which is exactly what is needed to train newer models to replace an ever larger share of the tasks they perform. AI agents that take over computer desktops to perform multi-stage tasks are trained on data recorded from office workers – although so far they have proven to be less successful than initially hoped.

"Wheel of Progress"
Leo Lau & Digit / https://betterimagesofai.org

A more successful example of this has already come through driverless cars, which for years have been trained on data recorded from millions of human drivers. Now the driverless taxi service Waymo is growing rapidly in San Francisco, overtaking Lyft’s market share in the city in recent months. Of course, an efficient public transport system would render most taxi services superfluous without any AI magic, but that is a discussion for a different day.

In upcoming struggles, this may mean opposing AI’s introduction to a workplace completely for a time as a way to defend livelihoods. This should go along with a clear message that the issue is not AI, but corporate use of it to attack workers. Those opposing AI will undoubtedly be called Luddites and enemies of progress – a worn-out cliché raised by bourgeois media outlets whenever workers oppose job cuts or wage freezes imposed in the name of technological innovation. However, they ought not to be deterred, and Marxists should not cede ground on this question.

It should also be kept in mind that whether AI really can replace a job or not will often be irrelevant to layoffs across many industries. As happened in the jobs massacre in the American auto industry in the aftermath of the 2008 financial crash, when automation was used as an excuse to cut tens of thousands of jobs and restructure the leading US auto producers from top to bottom, AI will undoubtedly be used in the context of a new crisis as a convenient justification for layoffs regardless of its impact on productivity.

Although AI is an impressive technology with many potential uses, it is massively shaped by the ideology of the ruling elite, in particular the emerging fascist-libertarian culture of Silicon Valley tech billionaires. Marx argued that technological developments reflect the interests of capital, and AI is no exception.

But the methods behind AI could be leveraged for a huge expansion of economic productivity the world over. Modern AI innovations rest on statistical methods of almost unimaginable scale. At the moment, these are leveraged primarily for profit (or, in the case of LLMs, some vaguely imagined future profitability): recommending products on Amazon, determining social media feeds, and predicting the next word (or pixel). Such methods – and the hardware also monopolized by big tech – could clearly be leveraged to make possible the economic planning and productive efficiency required to overcome global warming and poverty.

Even if AI never reaches the vague yet dizzy heights promised by tech billionaires, in the long run it has the potential to massively increase productivity across society. However, the benefits of this will only ever be reserved for a narrow elite unless the technology and its future development is put in the hands of those who develop and use it: the working class.

 

AI’s military and surveillance uses

Another significant issue that is not covered in Hao’s book is the use of AI for military and surveillance purposes. Again, it could be argued convincingly that this is outside the purview of Hao’s book. However, it is a critical aspect of the AI industry and throws light on the industry’s increasing links to the state.

Upon Altman’s reinstatement at OpenAI after the failed attempt to oust him in 2023, a new board of directors, including ex-Secretary of the Treasury Larry Summers and retired US Army General Paul M. Nakasone, solidified OpenAI’s government ties. Under the Biden administration, and now increasingly under Trump, AI has been identified as a top national security priority. The big players in the industry, despite their previous association with Democratic politics, have continued to enjoy preferential treatment under Trump.

In the days after his inauguration – at which groveling tech leaders famously lined up to pay homage to the new president – Trump hosted Altman at the White House to announce the $500-billion Stargate initiative. By dollar value, this is the largest infrastructure project in US history. Over the next four years, the project aims to massively expand US data-processing capabilities by building a huge wave of data centers. Since Hao’s book was published, OpenAI, Anthropic, Meta, and X have each signed $200-million contracts with the US government for their services.

While the advantages of such applications are yet to be seen, what existing AI systems have shown is the deadly result of further distancing the combatant from the consequences of his or her actions. This has been seen most tragically in Israel’s bombardment of Palestinians in Gaza. A 2024 report by +972 Magazine explains how the Israel Defense Forces introduced a range of AI systems to generate bombing targets in Gaza.[20]

Using traditional target-identification methods, Israel would have run out of targets to bomb within a few weeks of October 7. According to +972’s sources, the new Lavender AI system analyzed surveillance data from Gaza and identified 37,000 individuals as “junior Hamas officers,” who then became targets for Israeli bombs. Another AI system, grotesquely named “Where’s Daddy,” automatically informed human monitors when targets returned to their families in the evening so that the Israeli Air Force could launch a bombing raid.

It is clear that AI is a further development of the tendency of military technology to distance combatants from the consequences of their actions. A +972 source who had worked with Lavender told the magazine that it reduces the human role in target selection to rubber-stamping the system’s recommendations, offloading moral responsibility for the decision to target an official and their entire family from a human analyst to an algorithm.

Without knowing the particular architecture, it is of course difficult to say how targets are generated. With large AI systems based on neural nets, however, the factors behind a recommendation are often a black box, even for the system designers, and such systems are prone to “hallucinations,” in which incorrect information appears in the output. Many factors completely unrelated to involvement with Hamas may well have played a role in target selection. Precision bombing has always been a fraudulent notion, and introducing AI technology into target selection hardly reduces “collateral damage.”

The IDF’s AI systems are undoubtedly some of the most advanced in the world; indeed, their continuous surveillance of their own population and Palestinians in the West Bank and Gaza has provided a treasure trove of data with which to train such models. The development of equivalent AI systems in imperialist armies is undoubtedly underway the world over, in many cases inspired by “combat-tested” systems like Lavender.

Beyond generating new bombing targets, AI is seen as a critical tool by militaries for its capacity to process data and inform decision making in superhuman time. The promise is that the analysis and synthesis of battlefield data necessary for tactical decisions can be achieved in seconds, giving a massive speed advantage over conventional foes. The industry leader in developing these technologies is US-based firm Palantir, whose latest advert ran with the slogan, “battles are won before they begin.”[21]

Palantir, founded by Peter Thiel among others, has seen its stock valuation increase 23-fold since the start of 2023. This is no doubt partly due to the general hype around AI technologies and the massive injection of public money into rearmament. But it is also due to Palantir’s close relationship with the civilian and military arms of the Trump administration. It has signed a number of contracts with various sections of the US government in the last year, including a $10-billion contract with the Pentagon on July 30. Its remit includes using generative AI and new drone technologies to develop cutting-edge military products.

Palantir is also on the leading edge of efforts to leverage AI to bolster surveillance inside imperialist countries. The firm has led the Trump administration's efforts to gut government agencies and increase executive power over the functions of the US government. One of the many controversies surrounding Elon Musk’s Department of Government Efficiency was its ability to gain access to critical government databases, including Social Security data and Treasury records, which means the data of every American has effectively been stolen. Palantir has been contracted by the Trump administration to combine these data sources into a single database on all Americans. Additionally, it is working with Immigration and Customs Enforcement “to track migrant movements in real-time.”[22]

Under the guise of efficiency, the US government and Palantir are effectively trying to produce surveillance technology that will be able to link individuals to their bank accounts and home addresses behind the backs of the courts and Congress, data that was previously separated within the government to avoid such concentrations of power.

Not only will this enable Trump to spy on and intimidate opposition within the US, but it is also a boon for Palantir as a private company. Essentially, it has been handed some of the most extensive data in the world to develop new AI systems for free. Under the auspices of AI development and efficiency the US government has thus stripped itself of some of its most valuable assets and placed them in the hands of Palantir.

[Image: “Surveillance View A” by Comuzi / BBC, via https://betterimagesofai.org]

The company’s reach is not confined to the US. In the UK, where Palantir is headed by Louis Mosley, grandson of the late British fascist Oswald Mosley, the company won a £330-million contract to rework the National Health Service’s data infrastructure. The system has reportedly been rejected by hospital staff across the country as unfit for purpose, leading the Labour government to bring in KPMG, at a cost of £8 million, to somehow turn around staff attitudes. Such are the “efficiencies” of privatization.[23]

The distinction between the “surveillance capitalism” industry, to use the term coined by Shoshana Zuboff [24], and government surveillance is diminishing. Far from regulating these industries, the state is handing over the keys to data infrastructure that would cost millions on the private market.

None of this poses radically new questions for Marxists; rather, it poses old questions within a new framework. The use of AI on the battlefield threatens to massively increase an army’s capacity for death and destruction. Along with the massive rearmament being undertaken by imperialist powers internationally, the battlefield use of AI, in particular its use for target selection and “preventive strikes,” must be strongly opposed.

The same goes for mass surveillance. The data people produce belongs to them by rights, yet has been misappropriated by private companies to maximize advertising profits. Its – often illegal – use by governments for surveillance has also been a reality for decades. AI may not fundamentally revolutionize how surveillance is carried out [25], but it does promise to enable spying at an even larger scale than was previously possible.

The AI Industry as an “Empire”

So far, we have discussed important aspects of the AI industry that are excluded from Hao’s book. Now we turn to one that is included: her central metaphor of AI as an empire. She explains:

Over the years, I’ve found only one metaphor that encapsulates the nature of what these AI power players are: empires. During the long era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjugated to mine, cultivate, and refine those resources for the empires’ enrichment. They projected racist, dehumanizing ideas of their own superiority and modernity to justify – and even entice the conquered into accepting – the invasion of sovereignty, the theft, and the subjugation. They justified their quest for power by the need to compete with other empires: In an arms race, all bets are off. All this ultimately served to entrench each empire’s power and to drive its expansion and progress.[26]

She continues:

The empires of AI are not engaged in the same overt violence and brutality that marked this history. But they, too, seize and extract precious resources to feed their vision of artificial intelligence: the work of artists and writers; the data of countless individuals posting about their experiences and observations online; the land, energy and water required to house and run massive datacenters and supercomputers. So too do the new empires exploit the labor of people globally to clean, tabulate, and prepare that data for spinning into lucrative AI technologies. They project tantalizing ideas of modernity and posture aggressively about the need to defeat other empires to provide cover for, and to fuel, invasions of privacy, theft, and the cataclysmic automation of large swaths of meaningful economic opportunities.[27]

Hao’s book makes clear that OpenAI and other big tech players like Google, Microsoft, Meta and X all do a great many horrible things reminiscent of 19th-century empires.

[Image: “Frontiers of AI” by Hanna Barakat, via https://betterimagesofai.org]

The comparison isn’t wholly false, but it is superficial and misleading. The analogy reflects a long-standing tendency among left-liberal critiques to present the crimes of capitalism as peculiar to certain industries. Reports, books and films – often with extremely valuable material – on the horrific conditions for workers, damage to consumers, and environmental destruction in this or that branch of production abound. At various points in the last two decades, rare-earth metal mining, garment production, the arms industry, and even fish farming have come under attack as exceptionally evil industries.

However, critiques limited to evil industries or individuals, as repugnant as they may be, end up promoting illusions in capitalism’s capacity for reform. The argument, sometimes explicit and other times implied, is that the solution is not a radical change in the organization of society, but a few more regulations to rein in a given industry’s excesses.

This is how Hao concludes her introduction:

But the empires of AI won’t give up their power easily. The rest of us will need to wrest back control of this technology’s future. And we’re at a pivotal moment when that’s still possible. Just as empires of old eventually fell to more inclusive forms of governance, we, too, can shape the future of AI together. Policymakers can implement strong data privacy and transparency rules and update intellectual property protections to return people’s agency over their data and work. Human rights organizations can advance international labor norms and laws to give data labelers guaranteed wage minimums and humane working conditions as well as to shore up labor rights and guarantee access to dignified economic opportunities across all sectors and industries.[28]

In the conclusion to the book, titled “How the empire falls,” Hao expands on this view, proposing a three-pronged framework for “dissolving empire”[29] by undermining the industry’s monopoly on knowledge, resources, and influence. Here she suggests a number of reasonable measures, including funding to support independent evaluations of AI models and alternative approaches to AI development, forcing companies to disclose training data, stronger labor protections, and broad-based education about how AI systems work as an “antidote to mysticism and mirage of AI hype.”[30]

Given the growing strength of the AI industry and the subservience of elected representatives in the US, Kenya, Chile and Uruguay that Hao herself lays out, these suggestions strike the reader as highly limited. Critically, they do not object to AI technology itself being held as private property – which, again, Hao’s book exposes as an absurdity, given the technology’s reliance on data garnered from centuries of humanity’s shared labor stored on the Internet and on algorithms developed by researchers over decades.

Hao’s suggestions leave us with a “call your local representative” type of Democratic Party protest politics. The implication is essentially that “If only our politicians knew how horrible AI is” then they would undoubtedly act against it. However, as Hao makes clear, the AI industry has been the sweetheart of Democratic and Republican administrations alike and neither party is ever going to take on an industry that is propping up the US stock market.

That politicians will not entertain even the limited measures suggested by Hao indicates that a much more radical approach is needed. What Empire of AI makes clear is that the guiding light behind all the decisions made in the production of ChatGPT and other companies’ models was how to minimize costs and scale as quickly as possible in the race for a monopoly position. Indeed, her analysis of the ideology driving individuals like Altman and Thiel shows that this race for monopoly is their explicit aim. If it requires hyper-exploiting desperate workers in Kenya or Venezuelan immigrants and destroying access to fresh water across Latin America and the US, then so be it.

Hao’s detailed work ultimately leaves us with an account of the AI industry that is much less exceptional than the author realizes. While the industry may be among the most extreme examples of corporate greed and thievery, the general tendencies are of a piece with most major 21st-century industries, from food production to auto manufacturing.

The question of how to eliminate the rampant exploitation in the AI industry, as with many others, is ultimately one of taking on the world capitalist system itself. AI, the hardware, and the algorithms behind it ought to be public utilities, run according to the needs and interests of the great mass of the world population. This will require a much more thoroughgoing transformation of society than a few policy tweaks.

Conclusion

The main aim of this review has been to critique the limited political conclusions and oversights of an otherwise very valuable work. Despite claims that AI is somehow completely novel, the political and social questions it poses are not fundamentally different from those raised by previous technological developments such as electricity, the production line, and the Internet.

That is not to say that each of these technologies does not pose its own particular challenges and opportunities for revolutionaries. But in the most general terms, opposing mass job displacement, the extension of the bourgeois state ever further into private life and politics, and the threat of deadlier military systems has strong historical parallels.

What is clear is that appealing to policymakers and human rights organizations is not enough. AI is a modern globalized and highly exploitative capitalist industry. The empire Hao speaks of is not OpenAI, Google, or Facebook, but the entire apparatus of the modern imperialist states – with America far ahead of its rivals – which works to advance the interests of every major corporation, investment fund and bank above all else.

At the same time, modern AI systems represent a genuine technological development and a further step in the concentration of technology and data in the hands of major corporations and banks. As discussed above, they are based not just on clever algorithms but on the artifacts of human culture accumulated on the Internet, developed by billions of people over the course of human history. They have only been made possible by the uncounted labor of masses of working people, and it is they who should control and benefit from these systems.



[1] Technically, models like GPT-4 or o3 are LLMs, and ChatGPT is an extra layer on top that allows users to have conversational chats with the model, but most people refer to ChatGPT as a single AI tool.

[2] According to a Goldman Sachs report (https://www.goldmansachs.com/insights/articles/gs-research/generational-growth-ai-data-centers-and-the-coming-us-power-surge/report.pdf) from last year, the data centers needed to develop and run AI systems, along with tech’s other digital services, will account for 8 percent of US power demand by 2030, up from 3 percent in 2022. The report argues that generative AI is likely to be the key driver of this increase, accounting for 200 TWh of power demand by 2030.

[5] In the aftermath of the disappointing release of OpenAI’s flagship GPT-5 model, Altman has recently walked back his efforts to associate himself and the OpenAI brand with AGI, telling CNBC last week “it’s not a super useful term.” It was only eight months ago that Altman proclaimed in a blog post, “We are now confident we know how to build AGI as we have traditionally understood it.”

[13] A recent investigation by More Perfect Union (https://www.youtube.com/watch?v=3VJT2JeDCyw) on Elon Musk’s efforts to build the world’s biggest data centers in Memphis gives an insight into the devastating impact this infrastructure is having on communities.

[25] For example, the 1971 arrests of leaders of the left-wing terrorist Baader-Meinhof group came after a special police unit used a computer to estimate where members were most likely living in Berlin and then conducted deep searches in those areas.

[26] Empire of AI, p. 16.

[27] Ibid., p. 17.

[28] Ibid., p. 19.

[29] Ibid., p. 419.

[30] Ibid., p. 421.


