Our current AI acceleration started in 2012

Most people, even inside the AI community, were blindsided by the capabilities of ChatGPT, despite the many recent advances in the field. Pinpointing the exact moment when the machine learning revolution began is challenging, as advances have been unfolding steadily for several decades. If one had to choose a single year, however, it would be 2012. That year was a significant milestone in the field of deep learning, a subfield of machine learning.

This significance stems from AlexNet, a deep learning model that won the 2012 ImageNet Large Scale Visual Recognition Challenge, a competition for computer vision algorithms. AlexNet outperformed the previous year's winner by a wide margin, marking a dramatic shift in how such tasks could be approached. It demonstrated the potential of convolutional neural networks (CNNs) and of deep learning more broadly, igniting a surge of interest and investment in the field. The following years brought many critical advances, such as the resurgence of recurrent neural networks (RNNs) for sequence data, the introduction of generative adversarial networks (GANs) in 2014, and the advent of transformers in 2017, which revolutionized natural language processing and led to models like GPT (Generative Pre-trained Transformer), the model family that powers ChatGPT.

Indeed, machine learning benchmarks have regularly been surpassed over the last decade and rendered obsolete as the technology progresses. The ImageNet challenge is one of the few older benchmarks that remains active and competitive. The current state-of-the-art model on ImageNet, BASIC-L (fine-tuned with the Lion optimizer), achieves significantly higher accuracy than humans.

The holy grail in AI is to achieve artificial general intelligence (AGI). In machine learning, a model's ability to “generalise” from the data it was trained on to new data it has not seen before is key. The term AGI, however, refers to a quantum leap beyond that ability: an AI that can deal with any data, in any context, as intelligently as humans can. Of course, such an AGI would also have capabilities that humans do not, like much larger memory and the ability to network with other AIs and effortlessly learn from what they have learned.

Until recently, knowledgeable insiders in the AI community thought the technology was underhyped. Not any more. In 2023, Alphabet CEO Sundar Pichai felt emboldened to say that AI is a more profound technology than fire or electricity. Things are developing so fast that making accurate predictions about AI's impact is hard. Yet the contours are coming into view: AI will have a sweeping impact on nearly everything, from how we work to geopolitics. As a dual-use technology, not all of that impact will be positive, especially if we make injudicious decisions about its use. Even what turns out to be a benign change in the long term, like a new age of productivity and economic growth, could be disruptive and painful for some in the short run.
