How Big Tech Learned to Stop Worrying and Love the Bombs
Until quite recently, many Big Tech firms opposed the militarization of AI, but that now seems like ancient history as they move to sign partnerships with arms companies. The prospect of lavish Pentagon funding for AI is too tempting to refuse.

Google, Meta, OpenAI, and Anthropic were opposed to the use of AI tools for military purposes at the start of 2024. All of these companies had changed course within a year, with some moving quickly to sign partnerships with defense contractors. (Angela Weiss / AFP via Getty Images)
In any list of “known unknowns” facing the world in 2026, artificial intelligence must be close to the top. Are predictions that widespread AI adoption will displace hundreds of millions of workers about to be realized? Will the AI bubble burst? Will the United States or China win the race to “artificial general intelligence”?
Nick Srnicek’s book Silicon Empires doesn’t answer any of these questions directly, but it does, as the author puts it, “offer a map of the terrain in which we must fight.” By carefully charting AI’s development within its proper economic and geopolitical context, and spanning analysis of both the US and China, Srnicek’s guide to AI can help us maintain a long-term, realistic perspective of the technology’s likely trajectory.
Beyond Bubbles and Chatbots
It’s no longer a fringe idea to say there is a bubble in AI, since this has even been acknowledged by industry darlings like Jeff Bezos and Bill Gates. OpenAI CEO Sam Altman appears to already be positioning his company for a state bailout. One measurement of the AI bubble finds that it is seventeen times as large as the dot-com bubble and four times bigger than the subprime housing bubble that triggered the 2008 financial crash. A crisis is clearly in the making.
Srnicek’s sober analysis encourages us to look beyond the bubble. The fact that AI will go through painful birth pangs is neither new nor surprising: the history of technological breakthroughs is one of struggle and strife before success. Furthermore, given the strength of their market position and their intrinsic importance to global digital infrastructure, it is highly unlikely that any crisis will take down the Big Tech firms, which are the leading players in AI development.
As Srnicek puts it:
If an AI winter takes hold, it remains unlikely to be a longstanding one. The potential of the technology remains too high, and the significance of first-mover advantages too great, for the Big Tech companies to willingly relinquish control over the direction of AI development . . . thinking in terms of bubbles narrows the vision of AI’s impact too much.
There have been renewed questions about the true potential of AI, with skeptics pointing to the slowing progress in OpenAI’s latest ChatGPT edition as a case study in the limitations of the “scaling” model that has brought generative AI to this point. For Srnicek, focusing on chatbots like ChatGPT is looking in the wrong direction. Investors are pinning their hopes on the potential for industry-specific AI “agents,” which can go well beyond giving you a response to a question and can actually carry out actions to achieve a goal — to automate workflows across the economy. “Chatbots are a poor guide to where AI is headed, and both critics and opponents should ensure they have the right target in mind,” he argues.
What is perhaps missing from Srnicek’s analysis is any exploration of the macroeconomic conditions in which AI agents could be adopted across the economy. Economist Michael Roberts has argued persuasively that a mountain of “zombie” capitalist firms kept afloat by cheap credit since 2008 is not capable of investing big in AI. The global economy would have to undergo a seismic process of “creative destruction” to forge the space in which new players willing to fully embrace AI agents can emerge. AI development is ultimately bound by the dynamics of capitalist political economy.
The AI Strategies of Big Tech
Srnicek’s 2016 book Platform Capitalism excelled in conceptualizing the breadth of digital platform business models that had begun to dominate almost all sectors, from “lean” platforms like Uber, which outsource everything except the core software, to “industrial” platforms like Siemens, which builds digital hardware and software infrastructure for manufacturing. Similarly, a big strength of Silicon Empires is the clarity with which it explains the different strategies Big Tech is pursuing in the field of AI. The differences in approach are significant and may ultimately determine which companies win the race to dominate AI.
AI, like the steam engine and electricity, is a general purpose technology (GPT). All GPTs have been characterized by their applicability across the economy, requiring broad dissemination to develop. Typically, the value of technological breakthroughs is captured downstream, when they are turned into sector-specific products.
That is why states have historically been fundamental to R&D, since they can afford to pursue advances in GPTs without making a profit. This was the case with the internet and semiconductors. In the case of AI, Big Tech is leading the innovation, but these firms must do so while operating for-profit business models.
Trying to square this circle has led to the emergence of four strategies. First, the infrastructure strategy seeks to dominate the foundations of the AI economy, upon which other firms can build. Amazon and Microsoft are key players here, consolidating their oligopolistic positions within the cloud computing markets. For these companies, huge capital expenditures on data centers constitute an investment in the future growth of AI, as they prepare to collect cloud rents from the sector-specific products that will rely on their infrastructure to operate.
For those benefiting from the infrastructure strategy, the more widely AI is disseminated, the better. Microsoft CEO Satya Nadella has praised Chinese company DeepSeek’s chatbot, which has similar capacities to those of ChatGPT but at a fraction of the cost, as a big step toward “ubiquitous” AI. Microsoft has teamed up with an education nonprofit in the United States offering free chatbot usage to teachers “to bring the US education system onto Microsoft’s servers.”
The second strategy is to lead at the innovation frontiers of AI. OpenAI, Anthropic, and DeepSeek are all developers of cutting-edge AI models. For those pursuing a frontier strategy, staying one step ahead of the competition is essential for capturing value, as this innovation edge is the only thing that can place the company’s IP at the center of a broader development ecosystem.
All and Everything
The challenge frontier companies face is that the costs of innovation are enormous, due to the amount of “compute” needed to power it. Meanwhile, the task of commercializing these technological breakthroughs is fraught with difficulties, and when greater focus is placed on commercial deployment, research can suffer.
The frontier firms are banking on artificial general intelligence (AGI), the holy grail of AI that reporter Karen Hao found to be a one-size-fits-all excuse for OpenAI CEO Sam Altman to dismiss all criticism of his company’s business practices. For Srnicek, we should simply understand AGI as an AI model that can be “applied across all sectors.” This would erase in one stroke the difficulties frontier AI companies have in capturing value from their innovations due to the need for sector-specific tools. Srnicek describes the potential for AGI as “immense,” but it’s important for us to remain skeptical about whether it is achievable.
The conglomerate path, the third strategy, is an attempt to build sector-specific AI products across a large swath of industries and to dominate as the conglomerates of old did: through ownership and acquisition. Google is at the forefront of this strategy, having built as many AI foundation models as the next three largest competitors (OpenAI, Microsoft, and Meta) combined.
Google’s pursuit of AI domination requires the company to possess capabilities across the AI value chain: positioned at the cutting edge of research, with a strong footing in infrastructure, and capable of building high-quality products for various sectors. The company’s release of a series of AI health care tools in recent years, from personal health to drug development, exemplifies how this strategy is operating on the ground. In China, Huawei is at the vanguard of a group of Big Tech firms that are pursuing this “all and everything” approach to AI development.
Finally, there is the open strategy, with Meta in the United States and Alibaba and DeepSeek in China as the leading deployers. As the name suggests, the open strategy involves opening up AI models so that other developers can build upon them. In the case of Meta’s “Llama” models, this doesn’t meet the standard of open source, as there is still a significant lack of transparency in the training data and the algorithms behind the models. Even so, the weights used in the modeling are publicly available, and this does make it easier for others to access and modify the models.
What advantage does Meta derive from the open strategy? Other Big Tech firms are building high walls of intellectual property around their innovations, creating an exclusive zone of engagement with selected partners. Meta, on the other hand, is able to build a broad ecosystem around its IP that organically attracts researchers and developers toward it. The latter will make their own improvements and breakthroughs, which “can then be readily wrapped back into Meta’s internal systems.” This is a strategy that could potentially cut costs significantly for Mark Zuckerberg’s company over the long term.
The Rise of the “Tech-Industrial Complex”
In his farewell speech in January 2025, Joe Biden warned of the risks of a rising “tech-industrial complex” in the United States. This consciously echoed the words of Dwight Eisenhower as he left the White House in 1961, famously expressing fears about a “military-industrial complex” that could dominate US democracy.
Like the military-industrial complex, the tech-industrial complex combines powerful vested interests within the state, most importantly the Department of War, with the largest players in the private market, which today are the Big Tech firms. This is a class alliance that has only come together very recently. As Srnicek highlights, Google, Meta, OpenAI, and Anthropic were opposed to the use of AI tools for military purposes at the start of 2024. All of these companies had changed course within a year, with some moving quickly to sign partnerships with defense contractors.
The dramatic change of heart is partly down to economic necessity. AI development is expensive, and the military offers the prospect of big, long-term financing. But the geopolitical turn ultimately has deeper roots. There has been a remarkable ideological shift among tech elites in the US, away from what Srnicek calls “the Silicon Valley Consensus” toward “techno-nationalism.”
The Silicon Valley Consensus was essentially a commitment among tech elites to US-led neoliberal globalization. Politicians and tech CEOs shared a “belief in technology’s capacity to create an American-led world of borderless commerce and data.” Light-touch regulation of the tech sector meant Silicon Valley had little reason to be concerned about state meddling. Abroad, Washington helped keep foreign economies open to US technology and limited the imposition of foreign taxes and regulations on US Big Tech, while value chains across all the major tech firms stretched from China to the United States, keeping costs down.
What killed the Silicon Valley Consensus was China’s rise, opening up a new constellation of class conflicts and interests. Chinese tech giants began to become genuine competitors for their American rivals, changing the calculus for Silicon Valley. Meanwhile, since at least Donald Trump’s first presidency, the state has prioritized American technological domination over global interconnectedness. This continued through the Biden presidency, with tightening sanctions on critical technologies like semiconductors, and under Trump 2 it has flourished into what Srnicek calls a “technonationalist vision of American supremacy and unhindered innovation.”
The level of integration between Big Tech and the state is now undeniable. One $9 billion Pentagon contract for a “joint warfighting cloud capability” includes all of the big US cloud players: Amazon, Google, Microsoft, and Oracle. Ties between tech firms and the military have increased rapidly. For Srnicek, it is no coincidence that the rise of the tech-industrial complex has come alongside Big Tech prosecuting “a war against its workers,” many of whom have sought to resist the turn to militarization.
The rise of techno-nationalism in the United States has been mirrored in China. As in the US, Chinese Communist Party elites began by taking a hands-off approach to the emergence of large and powerful digital platforms in China, seeking to encourage the industry’s growth. However, as tensions with the United States heated up, Chinese president Xi Jinping increasingly sought to direct tech companies toward state priorities. This has involved cracking down on many companies focused on facilitating consumption, such as the gig economy platforms Meituan and DiDi, while pushing tech firms to contribute instead to industrial development, the raison d’être of Chinese Communist Party rule.
Thus, in both the US and China, we have the emergence of a potential new hegemonic order “due to an unravelling of class coalitions between state economic interests, state security interests and platform capitalist interests.” Srnicek is circumspect about the prospects for this new order to consolidate itself, highlighting the countervailing tendencies that push away from militarized techno-nationalism and the relative independence of Big Tech from the state. But the era of neoliberal globalization is clearly over, and the resulting fusion of the state and Big Tech around a nationalistic vision for AI carries with it extreme dangers for everyone.
Where Next?
In a contest between the United States and China to dominate AI, which country is likely to come out on top? Srnicek’s analysis leans toward the idea that China, despite having many weaknesses relative to the US, could well win the tech race.
The reasoning is disarmingly simple: whereas the US tech industry is focused on innovation, China’s priority is adoption, and it is likely that adoption will be decisive in the long run due to the need for a general purpose technology like AI to be diffused across the economy to reach its full potential:
In previous industrial revolutions, GPTs led to great power transitions not because one country captured monopoly profits, but rather because one country excelled in adopting a new technology and using it to dramatically change their entire economy in terms of productivity and growth. This widespread transformation of an entire economy — not a single leading sector — is what enables rising great powers to eventually overtake and surpass the incumbent hegemons.
Whatever the outcome of this contest, it’s unlikely that the technology will break decisively into two hemispheres, east and west, due to the complex interaction of value chains internationally. Instead, there will be a “layering of different geopolitical [tech] stacks,” with the pursuit of a balance between American and Chinese power likely to be a viable strategy for many countries, although a challenging one to pull off.
Unlike Srnicek’s 2015 book (coauthored with Alex Williams), Inventing the Future, which encouraged the Left to embrace automation as part of a postcapitalist vision, Silicon Empires steers clear of developing leftist policies for AI. Srnicek restricts himself to just two demands: no war between the United States and China, and Big Tech must not be allowed to dominate the development of AI.
Those are useful starting points for orienting the Left on AI, but ultimately a more ambitious agenda will be required. Any contemporary socialist program worthy of the name must be able to explain what role AI should play in the economy and society, how it should be governed, and what its relationship should be to the state and between states. Whatever happens in 2026 with the AI bubble, the political challenges posed by this powerful technology will only grow greater with time.