Posted on Jul 31, 2020 at 7:31 am
For several years now, scientists, entrepreneurs and political leaders have maintained that artificial intelligence is the major technological innovation of the turn of the century, and that it will change our lives as profoundly as electricity did in its time. And yet, as the world faces the most serious pandemic in its recent history, it is clear that artificial intelligence has not been of decisive help in managing the crisis. Containment, the method that has always been used to limit epidemic contagion, has emerged as the most effective response for many governments.
This paradox has been highlighted by several public figures in recent weeks, and it deserves our attention. It should first be qualified somewhat. To begin with, it would be clearer to give up the expression “artificial intelligence”, a source of fantasies and confusion, and to call this algorithmic science by what it really covers, namely mathematical models. It would be incorrect, however, to say that these were absent from the management of the crisis. In China in particular, we know that they helped identify infected people, refine medical diagnoses, and even equip robots in hospitals highly exposed to the virus to transport drugs or food. They are today at the heart of “tracking” technologies, which raise well-known ethical questions. But beyond China, and once the epidemic peak has passed, tools such as Bluedot or the EpiRisk platform, which calculate probabilities of infection and model the spread of the virus, could prove very useful in our fight against a possible resurgence of the pandemic. These tools exist. Yet they have not taken hold.
This observation, made obvious by the health crisis, is not new: algorithmic models suffer from a lack of adoption. On the one hand, the tools we use daily and believe to be artificial intelligence – in particular the services offered by the giants of the Web, from Amazon’s purchase suggestions to Youtube’s recommendation algorithm – are not really that. They are most often fairly basic “machine learning” tools, capable of exploiting data to optimize or automate processes, but they will not radically change our lives. On the other hand, start-ups hire researchers to develop highly specialized models that rarely find their market and are adopted neither by large groups nor by public institutions, nor ultimately by consumers and users.
Some may remember that in the 1980s and 1990s, the fashion was for “expert systems”. A sort of software layer integrated into a company’s IT, they were supposed to replicate the know-how of a specialist in a particular field. A lot of money was invested in these systems, which often failed to prove their added value. And in the consulting world, it is striking how many consultants have stuck with the good old Excel spreadsheet. Algorithmic models may be going through their own “expert system” moment, a moment they must move beyond. Because the needs are there. I have seen, for example, a number of teams working on attendance forecasts for a store or public facility. To do this, they develop a small econometric forecasting model based on past attendance figures, which is very useful. But the real added value of such a tool today lies in being updated in real time from observational data, so-called “hot” data. Algorithmic models make this possible. And yet this potential is seldom realized.
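To make the idea concrete, here is a minimal sketch of what “updating a forecast with hot data” can mean in practice. It is a hypothetical illustration, not the models any particular team uses: a baseline attendance estimate built from historical figures, then corrected on the fly with simple exponential smoothing as live observations arrive. The figures and the smoothing parameter are invented for the example.

```python
def make_forecaster(history, alpha=0.3):
    """Return (forecast, update) functions seeded on past attendance.

    `history` is a list of past attendance figures (hypothetical data);
    `alpha` controls how strongly a live observation shifts the estimate.
    """
    # Naive baseline: the average of the historical figures.
    state = {"level": sum(history) / len(history)}

    def forecast():
        return state["level"]

    def update(observation):
        # Blend an incoming "hot" observation into the running level.
        state["level"] = alpha * observation + (1 - alpha) * state["level"]
        return state["level"]

    return forecast, update


forecast, update = make_forecaster([120, 135, 128, 140])
baseline = forecast()   # static forecast from past figures only
update(200)             # a real-time observation arrives
revised = forecast()    # the forecast has shifted toward the live data
```

The point of the sketch is the contrast: the baseline is frozen the moment the historical data is loaded, while the revised estimate moves with every observation. That feedback loop, however simple, is what most deployed forecasting tools still lack.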
We need to dispel two misleading ideas. The naive one, of a near future in which artificial intelligence solves all our problems. And the pessimistic one, of a niche technology reserved for expert mathematicians. By finding its markets and responding to well-identified needs, algorithmic models can become an everyday technology and make a real difference in our lives.
Charles-Edouard Bouée is Chairman and CEO of the Roland Berger consulting firm.