In recent years, there has been increased interest in Artificial Intelligence (AI). This is due in part to advances in training and building neural networks, which have made it possible to solve some difficult problems with greater success. This has led to very large investments in AI by governments and companies, and to increased interest in AI in academia and in the mainstream media. This provides a lot of funding and opportunities for AI researchers and companies. However, when there is so much hype and expectation, there is a real risk that expectations will not be met and that the bubble will burst. For AI, this has already happened twice, in the 1970s and 1980s, after expectations were not met. At those times, AI funding greatly decreased for several years. These periods are called AI winters. Will there be another AI winter soon? I will discuss this in this blog post.
Recent breakthroughs in AI
During previous AI winters, many people were disappointed by AI due to its inability to solve complex problems. One of the reasons was that the computing power available at the time was not sufficient to train complex models. Since then, computers have become much more powerful. Recently, some of the greatest breakthroughs in AI have been made possible by increases in computing power and in the amount of available data. For example, deep learning has emerged as a key family of machine learning models; it basically consists of neural networks with more hidden layers, trained on GPUs for greater computing power. Such models perform very well on tasks related to image classification, image labelling, speech processing and language translation. For example, a few years ago the ImageNet computer vision task was solved with very high accuracy by the AlexNet model, which used GPUs to train a neural network. Since then, various other improvements have been made, such as generating content using generative adversarial networks and playing games using reinforcement learning (e.g. AlphaGo).
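To make the phrase "neural networks with more hidden layers" concrete, here is a minimal sketch of a forward pass through a small multi-layer network. The layer sizes and random weights are purely illustrative (they are not from any real model); in practice, the weights would be learned from large datasets, typically on GPUs.

```python
import numpy as np

def relu(x):
    # Standard nonlinearity applied between layers.
    return np.maximum(0.0, x)

# A tiny "deep" network: input of size 4, two hidden layers of size 8,
# and an output over 3 hypothetical classes. Random weights for illustration.
rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 3]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(b) for b in layer_sizes[1:]]

def forward(x):
    # Each hidden layer: linear map followed by the ReLU nonlinearity.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    # Output layer: softmax turns the final scores into class probabilities.
    logits = x @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())
    return e / e.sum()

probs = forward(rng.normal(size=4))  # probabilities over the 3 classes
```

"Deeper" models simply stack many more such layers (AlexNet had eight learned layers; later models have hundreds), which is why GPU computing power and large datasets were so important to these breakthroughs.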
However, it can be argued that these models perform some tasks better than previous models but do not actually do anything fundamentally new. For example, although increasing the accuracy of document translation or image classification is useful, we are still very far from having models that can do something much more complicated, such as writing a text that makes sense or holding a real conversation with humans (not just a scripted chatbot!). It also seems clear that simply increasing computing power with more GPUs will not be enough to achieve much more complicated tasks. To achieve "General Artificial Intelligence", key capabilities such as common-sense reasoning, which are lacking in current models, must be considered. Thus, current deep learning models can only be seen as a small step toward a truly intelligent machine, and more research will be needed.
In fact, it can be observed that the biggest recent breakthroughs are limited to specific areas such as image and speech processing. For example, this year I visited the International Big Data Expo 2018 in China, and there were so many companies displaying computer-vision products based on deep learning that after a while one may wonder: what other problems can it solve?
Huge expectations towards AI and the need for a return on investment
There is no doubt that AI is very useful. But the huge expectations that some investors currently have towards AI are dangerous, as it seems that some of them will not be met in the short term. This could lead to disappointment, and to a decrease in investments (a winter).
For example, one of the most popular applications of AI currently discussed in the media is self-driving cars. Huge sums of money have been invested in this technology by multiple companies. However, when we see the recent car crashes and deaths caused by prototype self-driving cars in the US, it is clear that the technology is not 100% safe. I think that safety could only be achieved if self-driving cars were the only cars on the road, but this will not happen anytime soon. And who would want to ride in a car that is not safe? Thus, such research has the potential to lead to huge disappointment in the short term, as investors may not see a return on investment. Another example is the research by giants such as Amazon on drone delivery. It is certainly an interesting idea, but in practice such technology will face many practical problems (what if a drone crashes and kills someone? What if people start shooting at these drones or blinding them with lasers? How much weight can these drones carry? And would it even make sense economically?).
There is also a lot of hype in the media promising that AI could replace many jobs in the near future, including those of radiologists. Moreover, some researchers have even started to discuss the dangers of AI in the media, which seems very far-fetched given that we are nowhere close to general artificial intelligence. But all this discussion raises the expectations of the general public towards AI. To take advantage of the hype, more and more consumer products are advertised as being "powered by AI", such as cellphone cameras. Even the Bing search engine has been updated with a chatbot, which does not actually appear to be much "smarter" than the chatbots of the 1990s (see pictures below).
For companies, what will determine whether there is a next AI winter is whether they see a clear return on investment when hiring expensive AI specialists to develop AI products. If the return on investment is not there, then the funding will disappear and projects will be terminated.
I recently talked with a top researcher at the ADMA 2018 conference who has many relationships with industry, and he told me that many companies currently do not see a return on investment on their AI projects. That researcher predicted that an AI winter could occur as early as within the next six months. But we never know; this is very hard to predict. It is a bit like trying to predict the stock market!
Personally, I could see an AI winter happening in the next few years. But I think it could perhaps be a soft winter, where interest decreases but some interest always remains, since AI is useful. What is your prediction? Please leave your comments below!
Philippe Fournier-Viger is a professor of Computer Science and the founder of the open-source data mining software SPMF, which offers more than 145 data mining algorithms.