ChatGPT, the new sheriff in town

Thanks to ChatGPT, AI is on everyone's lips, and given the technology's possibilities, it is here to stay. Nevertheless, it is important to understand how and why it is the game changer we have been preparing for over the last few years.

Bottom line

A revolution a few years in the making is finally unfolding, generating disruption as well as creating massive opportunities. With the launch of ChatGPT last November, Artificial Intelligence (AI) is back under the spotlight. This time, the technology is truly transformational for virtually everything, deserving to be called the "iPhone moment" for AI and a turning point for a sector in which we have believed since the launch of our dedicated portfolio back in 2015.

In this note we explore the technology, its limits, why it is not sentient, and its immediate and potential applications.

Executive Summary

So, AI is real after all

  • The past few months are likely to be remembered as the time when AI made a breakthrough in economies and societies.

  • Generative AI has become efficient and easy to use, creating content that is often indistinguishable from human-generated output.

  • Six start-ups reached unicorn status in 2022 alone, and the shift toward AI integration is happening faster than anticipated.

Understanding a phenomenon

  • Transformers, a specific architecture of Artificial Neural Networks, are the secret weapon behind generative AI thanks to their ability to grasp language context.

  • Generative AI is an algorithm making correlations within a dataset, not a sentient algorithm aware of concepts.

  •  An AI model is only as good as the dataset it has been trained on, implying biases and limitations.

Applications are here, and it is just the beginning

  • The technological foundations of the generative AI ecosystem are already in place in hardware and software.

  • Immediate applications focus on productivity gains and fundamental research such as drug discovery.

  • Future applications cannot even be conceived yet, given rapid developments, such as multimodal AI, that massively increase the technology's potential.

So, AI is real after all

2022 will go down in history

As sensational as it may sound, future historians will likely look back on 2022 as a turning point for humanity. Not because of a sporting event, not even because of geopolitics, despite the tragic events currently unfolding in Ukraine. No, 2022 will most likely be remembered as the year when Artificial Intelligence (AI) finally made its breakthrough within economies and societies. What sort of breakthrough? Well, for starters, generative AI (i.e., the capacity to generate new data, see next section) became efficient and easy to use thanks to services such as DALL-E and the inescapable ChatGPT. AI is therefore now capable of creating artworks such as paintings or poems, something previously thought to be a purely human prerogative. Progressively, AI-generated content is becoming indistinguishable from human-generated content and is, in some cases, already on the verge of becoming indistinguishable from reality.

Money is flowing to AI players

The capabilities of such systems are clearly impressive, sometimes even seemingly unreal, considering the huge gap with existing technologies. Some respected figures have even, erroneously, suggested that such systems were actually sentient. Little more was needed to ignite a hype cycle, with players addressing (or supposedly addressing) the segment of AI applications enjoying strong tailwinds. Hence, in 2022 alone, six private companies operating in the generative AI space reached unicorn status, i.e., a valuation above $1bn. On the listed side, companies operating in the broader AI ecosystem have witnessed a strong uptick in their share prices, whether they are positioned at the infrastructure level or at the application level. This, of course, quickly drew comparisons with the dotcom bubble. This time, however, there is a functional technology behind it, and one with such capabilities that business models are already being built on top of it.

The shift will be much faster than anticipated

Looking back at the history of technology, experience has shown that, although inertia can be a strong headwind against adopting innovation, major shifts can happen extremely rapidly. Given its potential, generative AI is likely to fall into this category, as highlighted by the record user growth of ChatGPT, which took only two months to cross the 100mn-user threshold – it took 9 months for TikTok to do so, and 30 months for Instagram. If another example were needed, virtually every major company (and not only the majors) indicated during the last earnings season that it was, one way or another, looking to integrate AI into its strategy. Something big is clearly happening before our eyes, whether we like it or not. Hence, it is best to understand exactly why.

Understanding a phenomenon

Power wanted

A brief look at the history of AI shows that the concept is not a novelty, to say the least: the name was coined in 1956. And although the first practical application of AI dates back to 1958 (the Mark I Perceptron), its technological roots go back to the 1940s. During the following decades, AI came and went, with surges of enthusiasm followed by periods of stagnation, as the capabilities of the technologies of the day fell far below expectations, giving birth to so-called “AI winters” – but at least we got iconic movies. It was not until the 2010s that capable AI applications emerged.

The reason for these sluggish developments was brutally simple: AI is computationally intensive, and semiconductors took some time to deliver the necessary computing power, courtesy of Moore’s law. The first major breakthrough happened during the 2000s with the rise of, paradoxically, video games. Modern video games make heavy use of 3D; long gone are the days of Super Mario Bros. To power these games, a new breed of chips was developed, Graphics Processing Units (GPUs), sold by the likes of NVIDIA Corp. These chips are specialized and massively parallel, unlike the more traditional Central Processing Units (CPUs) sold by the likes of Intel, which are jacks-of-all-trades. This means GPUs can deal with large amounts of data in one batch instead of processing them sequentially. This is only logical, considering that video games require computing several million pixels for each frame, dozens of times per second. In short, the perfect tool to accelerate the computation of AI workloads.
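
To illustrate the idea of batch processing, the minimal sketch below uses NumPy as a stand-in for a GPU: adjusting the brightness of a frame's worth of "pixels" in one vectorized call versus one pixel at a time. The numbers and the NumPy setup are assumptions for illustration only; real GPU code would rely on CUDA or a similar framework.

```python
# A toy illustration of batch vs. sequential processing, with NumPy standing
# in for a GPU. Real GPU workloads would use CUDA or a similar framework.
import time
import numpy as np

pixels = np.random.rand(1920 * 1080)   # roughly one Full HD frame of pixels

# Sequential, CPU-style loop: one pixel after another.
start = time.perf_counter()
out_loop = np.empty_like(pixels)
for i in range(pixels.size):
    out_loop[i] = min(pixels[i] * 1.2, 1.0)   # brighten, then clip
loop_time = time.perf_counter() - start

# Batched, GPU-style operation: the whole frame in one call.
start = time.perf_counter()
out_batch = np.minimum(pixels * 1.2, 1.0)
batch_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s, batch: {batch_time:.3f}s")
```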


On top of that, GPUs were also the perfect fit for AI workloads thanks to their inherent ability to process matrices, which are the building blocks of Artificial Neural Networks (ANNs).

Giving brains to machines

Every modern AI application is built on an ANN. Conceptually, the technology is easy to understand: it emulates the structure and some characteristics of biological brains in order to deliver advanced capabilities.

Biological brains are made of specialized cells known as neurons. These cells are connected to each other through synapses to form vast networks: each neuron can be connected to up to several hundred other neurons, and the number of synapses within the human brain is estimated at over 1tn. Another notable particularity of neurons is that a stimulus on one neuron will impact, to some extent, all the other neurons it is connected to, enabling the processing of complex information and, in turn, enabling humans to function and develop thoughts and emotions.

ANNs emulate some characteristics of biological neurons through mathematical functions. They are also organized in large networks to process complex information and, similarly to a biological brain, the output of an artificial neuron is impacted by the inputs received from all the neurons it is connected to, as well as by the parameter used to “tune” the behavior of the neuron, also known as its weight. Given the intricate structure of networks embedding several billion connections, computing the result of any input boils down to a series of matrix multiplications – hence the usefulness of GPUs.
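
To make the link concrete, here is a minimal sketch, assuming NumPy and an arbitrary toy network whose sizes and weights are invented for illustration, of how computing an ANN's output reduces to matrix multiplications.

```python
# A minimal sketch of why ANN inference is "just" matrix multiplication,
# using an arbitrary tiny network (3 inputs -> 4 hidden neurons -> 2 outputs).
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=3)            # input signal fed to the first neurons
W1 = rng.normal(size=(4, 3))      # weights "tuning" each of the 4 hidden neurons
W2 = rng.normal(size=(2, 4))      # weights of the 2 output neurons

def relu(z):
    return np.maximum(z, 0.0)     # simple activation emulating neuron firing

hidden = relu(W1 @ x)             # each hidden neuron combines all its inputs
output = W2 @ hidden              # each output neuron combines all hidden neurons
print(output)
```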

Developing potent algorithms: not a stroll in the park

Having the computing power and a theoretical algorithm did not magically yield ChatGPT-level AIs. Several iterations of architectures, i.e., specific designs of ANNs, were developed and paved the way to today's state-of-the-art generative AI.

The immediate application for ANNs was to solve a problem that had troubled generations of computer scientists: Computer Vision (CV). A specific architecture inspired by the visual cortex of biological brains was developed, called Convolutional Neural Networks (CNNs). The modus operandi of CNNs is based on proximity: similarly to someone reading with their finger and looking at the words near it, CNNs pass over an image to detect local patterns (e.g., edges), which are combined into larger patterns (e.g., curves) to constitute elements (e.g., ears) which, once assembled, make it possible to characterize one object rather than another with a high degree of probability.
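
As a rough illustration of this proximity principle, the sketch below, assuming NumPy, a toy image, and a hand-made vertical-edge filter, slides a small filter over the image and responds strongly wherever the local pattern is present.

```python
# A minimal sketch of the CNN idea: slide a small filter over an image and
# respond wherever a local pattern (here, a vertical edge) is present.
import numpy as np

# Tiny 6x6 "image": dark on the left, bright on the right.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# 3x3 filter that detects vertical dark-to-bright transitions.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

# Convolve: for each position, weigh the local 3x3 patch by the filter.
feature_map = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        patch = image[i:i + 3, j:j + 3]
        feature_map[i, j] = np.sum(patch * kernel)

print(feature_map)   # large values mark where the edge was found
```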

However, this has its limits for Natural Language Processing (NLP). In texts, the information necessary for understanding can be located in separate places within a sentence or even a paragraph (e.g., “The boy who likes swimming, programming, hiking and playing games is named Marcus”, where the important information, i.e., the name of the boy, comes long after the subject). Hence the development of Recurrent Neural Networks (RNNs), equipped with a memory mechanism. However, RNNs have their limits too: similarly to a snowplow advancing on a snowy road, they cannot see what lies ahead; more importantly, their memory is limited, just as snowfall covers the road again once the snowplow has passed. Such systems were therefore limited to short texts.
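
The memory mechanism can be sketched in a few lines. Assuming NumPy, arbitrary toy dimensions, and random untrained weights, the network below reads a sentence one word at a time and carries a hidden state forward; it never sees what lies ahead, and older information progressively fades.

```python
# A minimal sketch of one recurrent step: new memory = f(current word, old memory).
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_size = 10, 8

W_x = rng.normal(scale=0.3, size=(hidden_size, vocab_size))   # input weights
W_h = rng.normal(scale=0.3, size=(hidden_size, hidden_size))  # memory weights

sentence = [2, 7, 1, 4, 9]                 # word indices of a toy sentence
h = np.zeros(hidden_size)                  # empty memory at the start

for word in sentence:
    x = np.zeros(vocab_size)
    x[word] = 1.0                          # one-hot encoding of the current word
    h = np.tanh(W_x @ x + W_h @ h)         # update the memory with the new word

print(h)   # the final state summarizes the sentence, within the memory's limits
```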

The revolution of transformers

On top of that, identifying a cat by its ears first and its whiskers last makes no difference if done in the opposite order. The same is not true for language, where context is crucial to understand the meaning behind the words. Context is derived from syntax, and in 2017 Alphabet developed a mechanism called "attention" to capture it, initially for translation purposes.

Take the following sentence: “It’s raining cats and dogs.” Unless you are in a tornado, it is (hopefully!) not something that happens for real. Every English speaker will understand that the actual meaning of the sentence is closer to “It is raining heavily”. Before the development of attention, however, Google Translate would translate this sentence literally, generating much stupefaction or laughter in the process.

Attention solves this problem by making correlations between words and their context, within or across languages. In this case, words like umbrella, raincoat, rain boots, and soaked clothes would frequently be encountered alongside the expression, allowing the model to construct a representation of the situation (what humans call meaning) and to iterate from there. Combining this mechanism with the processing of whole sequences at once yielded a powerful architecture called the transformer, which is the secret sauce behind every generative AI model.
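
A minimal sketch of the attention mechanism follows, assuming NumPy and random toy embeddings rather than a trained model: every word scores its relevance to every other word, and each word's new representation is a relevance-weighted mix of the whole sentence.

```python
# A minimal sketch of (self-)attention on toy, untrained embeddings.
import numpy as np

rng = np.random.default_rng(0)
words = ["it", "is", "raining", "cats", "and", "dogs"]
d = 16                                         # embedding size (arbitrary)
E = rng.normal(size=(len(words), d))           # toy word embeddings

Q, K, V = E, E, E                              # queries, keys, values

scores = Q @ K.T / np.sqrt(d)                  # how much each word relates to each other word
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1

context_aware = weights @ V                    # each word re-expressed using the whole sentence
print(weights.round(2))                        # attention weights between word pairs
```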

Translation is not only what you think

Indeed, translation is not reserved for pairs of languages. It can be applied within the same language: summarizing a text boils down to translating a long text into a shorter one while conserving its information. Human language can be translated into instructions understandable by machines, i.e., code. It can be applied to non-human languages, such as DNA or protein sequences. Or to what appears not to be a language at all: images.

Put another way, thanks to transformers, every problem that can be transposed into sequences of symbols can now be processed. Indeed, translation is just a matter of mapping one set of ordered sequences of symbols onto another. And thanks to transformers, we now have a general-purpose tool that is extremely gifted at translation thanks to its ability to understand context. But for that, the algorithm needs to be trained.

Training has its limits

The training of large models is conducted on even larger datasets through a process called supervised learning. Schematically, it boils down to presenting the algorithm with partial data (e.g., a text filled with blanks), having it generate data to fill in the blanks, and comparing that output with the original. Of course, the first try is far from perfect, and the model is tuned by changing its parameters until the result is coherent. This process is done automatically, as it would be totally impractical to change the weights one by one; this is a constituent of what is called Machine Learning. Humans can be added to the loop to fine-tune the model’s accuracy, something called Reinforcement Learning from Human Feedback (RLHF), where the model learns to maximize the frequency of outputs rated best by people – think of a dog learning to master a trick to get its reward.
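
The fill-in-the-blank principle can be sketched at a toy scale. Assuming NumPy, a three-sentence corpus, and a deliberately simplistic bag-of-words model (nothing like ChatGPT's actual training code), the loop below predicts the missing word, compares it with the original, and nudges the weights accordingly.

```python
# A toy illustration of "fill in the blank" training with gradient descent.
import numpy as np

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Build (context, target) pairs: each word in turn becomes the "blank" to fill.
examples = []
for sentence in corpus:
    words = sentence.split()
    for i, target in enumerate(words):
        context = words[:i] + words[i + 1:]
        x = np.zeros(V)
        for w in context:
            x[idx[w]] += 1.0              # bag-of-words context vector
        examples.append((x, idx[target]))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))    # the model's weights, tuned during training

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Training loop: predict the blank, compare with the original word,
# nudge the weights to make the right answer more likely.
for epoch in range(200):
    for x, target in examples:
        p = softmax(W @ x)                # predicted distribution over the vocabulary
        grad = np.outer(p, x)             # gradient of the cross-entropy loss w.r.t. W
        grad[target] -= x
        W -= 0.1 * grad                   # gradient-descent update

x, target = examples[0]
print(vocab[int(np.argmax(softmax(W @ x)))], "vs true:", vocab[target])
```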

Training a model therefore boils down to making correlations within a dataset, which creates obvious limitations. If the data is incomplete, the algorithm will never be able to make a correlation, hence the impossibility of “imagining” a concept: no mention of cats in the data means no cats in the output, even if the data mentions a strange animal with pointy ears, whiskers, and a frequent tendency to walk on keyboards. To mitigate this problem, large models are trained on huge datasets – the equivalent of 500bn words for ChatGPT, extracted from snapshots of the internet spanning a ten-year period. But gathering specific data can be complicated, like medical records to train a model specialized in healthcare. And it always costs time and money, hence the recourse to cloud computing to rent computing power rather than spending a fortune to fill a room with GPUs.

Most importantly, it should never be forgotten that these algorithms are not sentient. They are just parrots, admittedly increasingly gifted parrots thanks to the mathematics behind them allowing for highly accurate probabilistic correlations, but they remain prone to mistakes that would seem obvious to humans. As such, it is very dangerous to overestimate them.


Applications are here, and it is just the beginning

The ecosystem’s foundations are ready

The hardware ecosystem is already established, at least in terms of technological content, although massive investments will be required to put adequate capacity in front of what is certain to be strong demand. NVIDIA Corp is the clear leader in the space, thanks to securing the lead in terms of performance, but also to having provided the first programming interface (API) allowing developers to harness the power of its chips. This interface, named CUDA, has become the de facto standard in the Machine Learning community, and the ecosystem has crystallized around it, creating a strong barrier to entry for competitors. Competition may ultimately break through this barrier by proposing cheaper specialized chips in sufficiently high volumes for them to become commoditized, as Alphabet has started doing with its TPU family, but NVIDIA Corp’s supremacy will not disappear overnight. China will also have a major role to play, as U.S. sanctions are ultimately bound to force the country to develop an in-house ecosystem, which could prove to be a threat to existing players; the government has already designated the sector as being of strategic interest, meaning there will be strong support from the authorities.

The software ecosystem is also somewhat established, but will clearly have to step up its game. Currently, it consists either of cloud providers setting up dedicated platforms to access computing power, or of players specializing in a specific part of the ML ecosystem (e.g., Snowflake or Databricks for data management) according to a “best-of-breed” approach. However, this approach was established when AI was still in its development phase. For AI to become truly universal, some simplification will be required. This is where low-code/no-code approaches (similar to what C3.ai Inc proposes) will have a key role to play, by enabling entities with no deep expertise to roll out customized AI models. Ultimately, considering the importance of having access to both data and computing power, platforms à la Alphabet have a major opportunity to seize, thanks to the obvious synergies between their different businesses. However, in the shadow of the juggernauts, we are likely to witness the emergence of smaller but highly specialized players leveraging hard-to-come-by datasets, such as medical records.

Applications already exist

When it comes to applications for this technology, productivity gains immediately come to mind. Generative AIs are indeed already extremely good at some office tasks, such as extracting important information from a long text, or expanding a few bullet points into a more formal email. In creative industries, while they are unlikely to replace artists soon, they will shorten creation times by a large margin (e.g., the initial design of a background in an animated movie). In IT, code generation is already feasible for preparatory work, and will soon gain in accuracy for more advanced tasks. In robotics, Microsoft Corp has already developed an experimental framework allowing a robot to be programmed through voice commands, with the model generating the code automatically, opening the door to truly collaborative robots. All in all, these tools are a godsend for companies, and more broadly economies, facing the double problem of a shortage of qualified workers and an aging population. Such technologies will, of course, disrupt jobs such as call-center operators or unqualified assistants, but will clearly expand the possibilities for more qualified roles.

Another obvious application is access to knowledge and information. Although generative AI can amplify biases present within the dataset on which it has been trained, it is the perfect research tool thanks to its ability to understand context and make helpful correlations. Microsoft Corp has already integrated ChatGPT-derived technology within its Bing engine, driving a substantial surge in the popularity of the service. And while it would be too costly to continuously retrain models on new data, large language models have proven flexible enough to be fed new information through additional context (e.g., “knowing X, what can you tell me about Y?”), which can be supplied to the model through an API, as sketched below.
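
The sketch below illustrates the principle only; generate() and retrieve_documents() are hypothetical placeholder names standing in for a provider's text-generation API and a search over a knowledge source, not any specific vendor's interface.

```python
# A minimal sketch of feeding fresh information to a model through added context.
def retrieve_documents(question: str) -> list[str]:
    # Placeholder retrieval step: in practice, query a database, a search
    # index, or a vector store for relevant, up-to-date snippets.
    return ["ChatGPT crossed the 100mn-user threshold two months after launch."]

def generate(prompt: str) -> str:
    # Placeholder for a call to a large language model API.
    return "(model answer based on the supplied context)"

def ask_with_context(question: str) -> str:
    context = "\n".join(retrieve_documents(question))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
    )
    return generate(prompt)

print(ask_with_context("How fast did ChatGPT reach 100mn users?"))
```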

Finally, we see huge potential already unfolding in fundamental research. Alphabet’s DeepMind subsidiary has already predicted the structures of 200mn proteins, a huge leap forward for science. Players such as Baidu or Schrodinger Inc already apply AI to drug discovery, accelerating the development of new drugs that would otherwise have been left aside. While advanced AI technology gives rise to concerns reminiscent of Terminator’s Skynet, the reality is closer to a guardian angel.

The future is bright

One should remember that the technology is far from having reached maturity: its poster child, ChatGPT, is still in beta, meaning that huge improvements are down the road. One of these improvements already materialized on 14 March with the launch of the GPT-4 model, incidentally highlighting the frantic pace of development in the industry and a rapidly moving landscape. Apart from far greater accuracy than GPT-3, the main promise of GPT-4 is its multimodality: the input can be text, as with GPT-3, but also images, or both at the same time, allowing it to process complex tasks and massively increasing the technology's potential impact. To highlight the possibilities of multimodality: one could take a photo of a spare part of a mechanism, feed it into the system, and ask it to identify the part and explain how to install it, all while interfacing in real time with the inventory management system.

Improvements will also come from technical optimizations. Currently, large models can only run on powerful computers due to their sheer number of parameters, making it impossible to run them locally, e.g., on a phone. However, recent developments from the academic world have opened the door to a massive reduction in the number of parameters required to reach a given level of accuracy. Powerful generative AI may therefore run locally within a couple of years, once these optimizations meet dedicated hardware, as was the case for Computer Vision applied to photography. This will lessen concerns regarding the confidentiality of data, which will no longer leave the device, and give a massive boost to the technology's penetration rate. One related optimization, weight quantization, is sketched below as an illustration.
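
The sketch below shows post-training weight quantization, which shrinks the memory footprint of each parameter rather than their number; it is offered only as an illustration of this broader family of shrinking techniques, assumes NumPy and an arbitrary layer size, and glosses over the refinements used in production schemes.

```python
# A minimal sketch of post-training weight quantization: shrink 32-bit
# floating-point weights to 8-bit integers so a model fits in far less memory.
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.normal(size=(1024, 1024)).astype(np.float32)  # ~4 MB layer

# Map the float range onto the int8 range [-127, 127] with a single scale.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)    # ~1 MB layer

# At inference time, weights are de-quantized (or used directly by int8 kernels).
weights_dequant = weights_int8.astype(np.float32) * scale
error = np.abs(weights_fp32 - weights_dequant).max()

print(f"memory: {weights_fp32.nbytes / 1e6:.1f} MB -> {weights_int8.nbytes / 1e6:.1f} MB")
print(f"max per-weight error: {error:.4f}")
```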

Ultimately, what is important to remember is that AI is currently experiencing its “iPhone moment”. In 2007, when the first iPhone launched, most saw an attractive phone conveniently packing in an iPod. Few could have foreseen the ecosystem and the economy that would emerge in a matter of just a few years. We are convinced that the same will happen with AI, especially considering the massive room for improvement and the already record adoption rate. With the cat out of the bag, and even though ethical issues will arise, it would be suicidal to ignore this new technology. It will grant an immense competitive edge and accelerate economic development in a way rarely seen before. This is why AtonRa is closely monitoring the market to identify emerging applications and leaders. We have been investing in the field since the launch of the Artificial Intelligence & Robotics portfolio in 2015, initially mostly in the supporting infrastructure, but we are now ready to fully embrace the shift towards applications to best capture the exceptional growth ahead.

Catalysts

  • New services with new capabilities. Generative AI is entering the exponential growth phase. New models will bring major improvements in terms of capabilities, buoying the sector.

  • Integration in current/legacy services. Due to the segment being in an early stage, services mostly exist on a stand-alone basis. Their integration into "legacy" processes will increase their penetration rate. 

  • Optimization of the technology. Current models are not quite optimized and require substantial computing power. Being able to run locally and/or on smartphones would be a major selling point. 

Risks

  • Hype buildup. Generative AI is truly a revolution, but it remains in its infancy and is far from being fully reliable, which may be forgotten when looking at its potential. The consequent hype buildup, and its collapse, would be negative for the theme.

  • Restrictive regulation. Advanced generative AI is a dangerous tool in the wrong hands. Such misuse may prompt excessively restrictive regulation.

  • Data concerns. Data is critical to training AI, yet privacy concerns are growing regarding the use of data. This could also lead to biases not being rectified, implying a loss of trust in the technology. 

Companies mentioned in this article

Alphabet (GOOGL); Baidu (9888); C3.ai Inc (AI); Databricks (Not listed); Intel (INTC); Microsoft Corp (MSFT); NVIDIA Corp (NVDA); Schrodinger Inc (SDGR); Snowflake (SNOW)


Disclaimer

This report has been produced by the organizational unit responsible for investment research (Research unit) of atonra Partners and sent to you by the company's sales representatives.

As an internationally active company, atonra Partners SA may be subject to a number of provisions in drawing up and distributing its investment research documents. These regulations include the Directives on the Independence of Financial Research issued by the Swiss Bankers Association. Although atonra Partners SA believes that the information provided in this document is based on reliable sources, it cannot assume responsibility for the quality, correctness, timeliness or completeness of the information contained in this report.

The information contained in these publications is exclusively intended for a client base consisting of professionals or qualified investors. It is sent to you by way of information and cannot be divulged to a third party without the prior consent of atonra Partners. While all reasonable effort has been made to ensure that the information contained is not untrue or misleading at the time of publication, no representation is made as to its accuracy or completeness and it should not be relied upon as such.

Past performance is not indicative of, nor a guarantee of, future results. Investment losses may occur, and investors could lose some or all of their investment. Any indices cited herein are provided only as examples of general market performance and no index is directly comparable to the past or future performance of the Certificate.

It should not be assumed that the Certificate will invest in any specific securities that comprise any index, nor should it be understood to mean that there is a correlation between the Certificate’s returns and any index returns.

Any material provided to you is intended only for discussion purposes and is not intended as an offer or solicitation with respect to the purchase or sale of any security and should not be relied upon by you in evaluating the merits of investing in any securities.

