The political economy of AI

Size matters in AI. Ground-breaking advances in AI hinge on the 'scaling hypothesis': that larger models, trained on more data, with more parameters and at greater computational expense, emulate intelligence better across diverse tasks. Whereas previous advances in computing rapidly became cheaper, the reverse may now hold.

The importance of scale and its cost has implications for who will benefit most from AI, and those implications will be felt from geopolitics to competitiveness to the structure of our economies. GPT-4, a large language model (LLM), is thought to have cost more than 100 million dollars to train. At this early stage, only a handful of companies with deep pockets can compete in the race to create the most powerful AI. Among those disadvantaged are researchers at educational institutions, with implications for public access to knowledge.
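Why do such figures run into the hundreds of millions? A back-of-envelope calculation makes the point. The sketch below uses the common approximation that training a transformer takes roughly 6 × parameters × tokens floating-point operations; the model size, dataset size, hardware throughput, and rental price are all illustrative assumptions, not figures from this article or from any disclosed training run.

```python
# Rough sketch of a training-cost estimate. The ~6 * N * D FLOPs
# approximation is a common rule of thumb for transformers; every
# numeric input below is an illustrative assumption.

def training_cost_usd(params, tokens, flops_per_gpu_hour, usd_per_gpu_hour):
    """Estimate training cost from model size and dataset size."""
    total_flops = 6 * params * tokens              # rule-of-thumb FLOPs count
    gpu_hours = total_flops / flops_per_gpu_hour   # hardware time needed
    return gpu_hours * usd_per_gpu_hour

# Hypothetical run: a 1-trillion-parameter model on 10 trillion tokens,
# on GPUs sustaining ~1e15 FLOP/s (3.6e18 FLOPs per hour) at $2/hour.
cost = training_cost_usd(
    params=1e12,
    tokens=1e13,
    flops_per_gpu_hour=1e15 * 3600,
    usd_per_gpu_hour=2.0,
)
print(f"${cost / 1e6:.0f} million")  # prints "$33 million"
```

Even with optimistic hardware utilisation, the estimate lands in the tens of millions of dollars for a single run, before counting failed experiments, staff, and data acquisition.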

Most states could easily afford to spend this kind of money, yet few have so far entered the race in earnest. That is starting to change. An emerging trend to watch is the rise of AI nationalism: because AI will supercharge economies and boost military capabilities, expect states that can afford it to invest in technological sovereignty and economic independence, and to put in place other measures, such as export controls on semiconductors, to support the local development of AI.

Market concentration

The internet turned out to be incredibly beneficial for the development of sprawling digital platforms, largely due to economies of scale and network effects in digital networks. The digital landscape enabled companies to reach a worldwide audience at very low incremental cost. At the same time, the value of a platform to its users grows as its user base expands, as does its ability to gather data. Now, the extensive costs and substantial data requirements of training state-of-the-art AI models, plus their emergent properties, will further intensify this trend towards centralisation.
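The economics of this can be stylised in a few lines. One common (and contested) stylisation is Metcalfe's law, under which a network's value grows with the square of its user base while serving costs grow roughly linearly, so the value-to-cost ratio improves with every new user. The coefficients below are illustrative assumptions, not estimates for any real platform.

```python
# A stylised picture of network effects: Metcalfe-style value grows
# quadratically with users, serving cost roughly linearly. All
# coefficients are illustrative assumptions.

def platform_value(users, value_per_pair=0.01):
    # Value proportional to the number of possible user pairs
    return value_per_pair * users * (users - 1) / 2

def serving_cost(users, cost_per_user=1.0):
    # Incremental cost per user is low and roughly constant
    return cost_per_user * users

for n in (1_000, 10_000, 100_000):
    ratio = platform_value(n) / serving_cost(n)
    print(f"{n:>7} users: value/cost ratio ~ {ratio:.0f}")
```

Each tenfold increase in users raises the value-to-cost ratio roughly tenfold, which is one way to see why digital markets tilt towards a few very large players.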

The concept of emergence is particularly intriguing in the context of what are referred to as "foundation models" in artificial intelligence. These extensive models, like GPT-4, are trained on vast unlabelled datasets and exhibit emergent rather than explicitly intended behaviour. In this process, unexpected properties may surface: a model trained on a vast language dataset may, for instance, also produce code or music notation without any explicit instruction. While fascinating, this dynamic raises concerns about the competitive balance of the digital economy, as the top foundation models open the door to vertical integration.

On top of this, the user-friendly, flexible, and familiar nature of language interfaces facilitates the collection of yet more data and an affinity with a service, enhancing the value of these services to users and leading to lock-in. Moreover, these language interfaces are set to intensify the process of disintermediation first brought about by the internet. Legacy media, brick-and-mortar stores, travel agencies, and shopping malls have already felt the impact of this shift. Now even former disintermediators like search engines may face a degree of disintermediation themselves.

Centralisation vs Proliferation and Geopolitics

Containing the risks associated with the most powerful AI models will depend on whether they proliferate or remain in the hands of just a few players. Control of the hardware (semiconductors) used to produce the most powerful models will help the West keep an edge over potential adversaries like China in the short term and help reduce AI risk more generally. Trained models, however, are just software and can easily be copied. Whether for this reason or for hard-nosed business ones, OpenAI switched from an open-source model to a centralised one, preferring to provide access via application programming interfaces (APIs). Mistral, in the vanguard of open-source AI until recently, removed all reference to its commitment to open source from its website with the release of its most powerful model.

Expect this trend towards centralisation for the most powerful models to continue and even for governments to mandate it if AI becomes more powerful.
