I was amused reading the recent back and forth between Facebook’s (NASDAQ:FB) Mark Zuckerberg and Tesla’s (NASDAQ:TSLA) Elon Musk. It is funny when two heavy hitters, both of whom arguably should have a good understanding of a given piece of technology, fall on such opposite ends of the spectrum about its capabilities. Essentially, it all started when someone asked Zuckerberg what he thought of Musk and his warnings about AI being an existential threat to humanity. He responded that such warnings were very irresponsible, to which Elon Musk replied on Twitter that Mark has limited knowledge of the subject. Is one of them wrong and the other right? Yes, I believe so, but more about that later. This exchange also got me thinking: is this belief the source of a lot of the claims Musk has made about the capabilities and development of Tesla’s Autopilot system? In this article, I will look into this a little more and show you some challenges in computer vision that illustrate the difficulties faced by a primarily vision-based fully autonomous driving system like Tesla’s Autopilot 2.0.
Six months in, still no noticeable FSD Features
It was just over six months ago that Elon Musk promised customers would start seeing FSD features in Tesla’s AP2 system within six months at the latest. However, customers haven’t even seen their “Enhanced” Autopilot match the performance of the previous version of Autopilot, let alone any unique FSD features.
Now, for anyone familiar with the technology, the fact that Tesla is facing a lot of challenges comes as no surprise. Unless Tesla makes significant strides beyond the current state of the art in computer vision, I do not believe there is any way it can have a truly competent fully autonomous driving system under its current Autopilot 2.0 configuration. However, this is exactly what Tesla has been “promising,” going so far as to hint that it would have details of a fully autonomous ride-sharing platform before the end of this year. By the way, that text hasn’t changed since last year, so I guess we should now expect these details by the end of 2018?
Automation = Intelligence
For a long time now, I have said that Elon Musk and Tesla have been making knowingly misleading statements about the potential future capabilities of their system. However, the more I hear his views, the more I question whether he really understands the technology and how it works. Before I go into this, let me take a step back and talk a bit about what caused the recent revolution in the field of AI.
People have long used computers to automate work that humans traditionally had to do manually. Indeed, computing now touches nearly every aspect of human life. However, there were some problems that programmers found hard to write solutions for. One of the best examples is pattern recognition, for instance in the form of computer vision.
I can easily spot my wife in a large crowd of people, but if someone asks me how I do it, I will find it hard to describe what features I rely on to identify her. This is also one of the reasons witnesses find it hard to describe a potential suspect without the help of a sketch artist. It is generally hard to codify how the human brain interprets patterns from such visual data. This is where the idea of deep learning with large Convolutional Neural Networks comes in handy. The basic idea is that rather than trying to identify what features the system should look for, you instead just feed the system lots of data and have it learn the features that will help it identify similar objects.
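The “feed it data and let it learn the features” idea can be illustrated with a toy sketch (pure Python, no deep-learning library; the dataset and model here are entirely hypothetical). Instead of hand-coding a classification rule, we let a single artificial neuron learn its own decision boundary from labeled examples. A real CNN does the same thing at vastly larger scale, with millions of learned convolutional filters instead of two weights:

```python
import random

random.seed(0)

# Toy dataset: class 1 points cluster near (2, 2), class 0 near (-2, -2).
# We never tell the model this rule; it must infer it from the examples.
data = [((random.gauss(2, 0.5), random.gauss(2, 0.5)), 1) for _ in range(50)]
data += [((random.gauss(-2, 0.5), random.gauss(-2, 0.5)), 0) for _ in range(50)]

# A single "neuron": two weights and a bias, trained by the perceptron rule.
w = [0.0, 0.0]
b = 0.0
for _ in range(20):                      # a few passes over the data
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred               # 0 if correct, +/-1 if wrong
        w[0] += 0.1 * err * x1           # nudge the boundary toward the answer
        w[1] += 0.1 * err * x2
        b += 0.1 * err

accuracy = sum(
    (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == label
    for (x1, x2), label in data
) / len(data)
print(f"learned weights: {w}, training accuracy: {accuracy:.2f}")
```

The key point for the argument that follows: the neuron ends up with weights that separate the two clusters, but it has no notion of *what* the clusters are. It has only fit a pattern.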
The only problem was that training a complex “deep” network on a large dataset took an excruciatingly long time. With the advent of GPU processing for training these systems, we can now train on large amounts of data in a reasonable amount of time. A job that once took weeks or months on a large CPU cluster can now be done in a few hours to a few days, and this vastly expands the practical applications of these kinds of systems in pattern recognition. Over at FundamentalSpeculation, our price-action-based Momentum model is a result of harnessing this power of GPU processing in training.
However, at the end of the day, it is important to understand that pattern recognition is all these systems are doing. It is not that they suddenly have a deeper understanding of what the objects in a picture represent. This is well illustrated by the work of Papernot et al., who in a recent paper used an adversarial system to trick a classifier from MetaMind that identifies road signs. That is an instance of an adversarial system built specifically to target the classifier, but consider edge cases where there is graffiti or a sticker on a sign, and you realize there can be more benign instances where this becomes a problem.
Source: Papernot et al. The image on the left is an original image of a stop sign. The image on the right is a perturbed version that causes the classifier to misclassify it as a yield sign, even though the two look identical to the human eye.
To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. -Papernot, et al.
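The mechanics behind such attacks can be sketched in a few lines. This is a deliberately simplified toy, not the substitute-model attack Papernot et al. actually used; it illustrates the “fast gradient sign” insight (due to Goodfellow et al.) that in high-dimensional inputs, many individually imperceptible changes, each pushed in the direction that hurts the classifier most, add up to a large swing in its output. The weights, input, and labels below are all invented for illustration:

```python
# Toy "fast gradient sign" attack on a fixed linear classifier.
# score > 0 means "stop sign", score <= 0 means "yield sign" (illustrative).

n = 100
# A fixed, already-"trained" linear model over 100 features (think pixels):
w = [0.1 if i % 2 == 0 else -0.1 for i in range(n)]
# An input the model scores confidently positive:
x = [0.6 if i % 2 == 0 else 0.5 for i in range(n)]

def score(weights, inputs):
    return sum(wi * xi for wi, xi in zip(weights, inputs))

def sign(v):
    return 1.0 if v > 0 else -1.0

eps = 0.06  # per-feature perturbation budget: 6% of each pixel's range

# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping each feature *against* the sign of its weight lowers the score
# as fast as possible within the budget.
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

print("original score:   ", round(score(w, x), 3))      # 0.5 -> "stop sign"
print("adversarial score:", round(score(w, x_adv), 3))  # negative -> flipped
print("largest single-feature change:",
      max(abs(a - b) for a, b in zip(x_adv, x)))
```

No single feature moved by more than 0.06, yet the classification flipped. Deep networks are not linear, but they are locally linear enough that the same trick works on them, which is what makes these perturbations invisible to humans yet devastating to classifiers.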
The same concept applies to achievements in the field of deep reinforcement learning. A lot of people, including me, celebrated when DeepMind’s AlphaGo beat the world’s best Go player earlier this year. However, even in this example, if the size of the board were any different from what the system was trained on, it would have failed miserably.
Humans have a tendency to see a machine perform a particular task and assume it has a level of competency similar to that of a human capable of performing the same task. That generalization does not apply to machine learning/AI. To be clear, I am not saying automation will not cause massive disruption to the economy. It has the potential to displace a lot of jobs. However, the idea that the applications of “narrow” AI becoming prevalent today are an existential risk that needs to be regulated is silly. It is also why “billions of Autopilot miles” is a silly metric for gauging any potential advantage Tesla may have over its competition.
Driving a car is a very complex task. The reason the average human is reasonably good at it is that humans have “intelligence,” which allows them to have a high-level understanding of their environment and to handle most edge cases fairly easily. This is fundamentally not true for the AI systems we have today. When you think of these systems in terms of “automation” rather than “intelligence,” you start to realize some of the hard challenges that need to be overcome. To top it all off, Tesla’s attempt to achieve this using a primarily vision-based system, if anything, puts it at a significant disadvantage to the rest of the competition.
Challenges faced by a primarily vision-based Autopilot system
So what does this mean for Tesla’s Autopilot system? Firstly, a big part of my critique of its design is that it has minimal redundancy; it is primarily a vision-based system. This point is sometimes challenged by noting that the system has multiple cameras that offer redundancy. I don’t mean to pick on my fellow contributor ValueAnalyst, but the most recent written exchange I remember having on this topic was in a recent article I published about Audi’s (OTCPK:AUDVF) new Level 3 system.
He is not alone in making this assumption. A recent paper also argued that adversarial attacks like the one I described earlier do not apply to autonomous vehicles, because the same object is observed from various angles and scales, and this redundancy defeats instances of misclassification. What I find really funny is that the rebuttal to this argument came from Musk’s very own OpenAI.
We’ve created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like. – OpenAI Blog
Tesla’s challenge, though, is even bigger. What we have discussed so far is classification of well-defined categories of objects. Tesla’s system, however, not only has to identify and classify these well-defined objects but must also reliably identify any other potential obstacle in its drive path without false positives. That is a significantly harder problem. Again, I’m not saying these systems won’t improve. However, the current state-of-the-art systems are not reliable enough to do this, and there is no reason to believe Tesla has surpassed the state of the art in this field. This makes Tesla’s claim of achieving full autonomy in this short time frame all the more ridiculous.
Impact on valuations
None of this would matter much if everyone understood this as a field of research with future potential and did not factor its impact into current valuations. However, that is not the case. Again, I don’t want to pick on any one person, but I’m drawn to Morgan Stanley’s Adam Jonas. He is currently neutral on the stock after it reached his price target, but let’s take a look at what got him to that price target. I remember watching one of his interviews on CNBC a few months back and having a hard time stopping myself from throwing things at the television to stop the insanity. What was really amazing about his performance was his confidence in talking about a topic he clearly does not understand. I highly recommend you watch his full interview.
We continue to believe over 100% of the upside from the current price to our $305 target can be accounted for by the value of Tesla Mobility, an on-demand and highly automated transportation service we anticipate to be launched at low volume in 2018. – Morgan Stanley
In a note sent to clients earlier that month, he broke down the company’s prospects by segment. He assigned zero value to Tesla Energy because of its negative margins, believing any prospects for that segment would be a rounding error in the grand scheme of things. He was significantly below the Street and management on Model 3 sales volume while being higher than most on its average selling price ($60,000). He discussed some of his assumptions in an interview with Bloomberg at the time. He also is not very bullish on the sale of electric vehicles to individual customers. Instead, he gets to his valuation by assuming Tesla will have a deployable autonomous driving system that can be used for ride-sharing within a Tesla Network.
“Well, we think the electric cars for private use really are … for human driving pleasure for wealthier individuals. That’s why it’s so important that in the shared model where you’re not driving 10,000 miles a year, but 50 or 100 in a fleet operation, then the economics of electrification you can get that pay back period under three years. That’s the game changer – shared.” – Adam Jonas, Morgan Stanley
So what happens if this possibility evaporates? How much real demand would there be for a $35,000 electric vehicle if the possibility of generating revenue by participating in an autonomous ride-sharing network goes away? Even setting that aside, how many people are really interested in a midsize luxury sedan if the convenience of autonomous driving is taken away? Consider the volume of all Small/Midsize Luxury car sales in America last year, shown below:
The entire Small/Midsize Luxury segment sold a little over 800,000 vehicles across all manufacturers and models in the US in 2016. Without the prospect of autonomous driving, this will be the target market for the Model 3. Further, as other manufacturers begin to bring more advanced driver-assist and limited self-driving features to lower-priced models, Tesla, relying primarily on its vision-based system, will have a hard time keeping up with the functionality offered by its competitors.
For a long time now, I have believed Elon Musk has been misleading TSLA shareholders and customers about the potential and development curve of Tesla’s Autopilot system. While I still believe that, I am now starting to question whether he genuinely understands the capabilities and limitations of narrow AI systems. At the end of the day, neither option is good for Tesla shareholders. If and when the Model 3 starts to ship in large volumes, those future customers will have a lot of expectations for the autonomous driving technology promised to them and they may not be as forgiving as some of the early adopters have been so far.
Disclosure: I am/we are short TSLA.
I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
Additional disclosure: The Content provided in this article should be used for informational and educational purposes only and is not intended to provide tax, legal, insurance, investment, or financial advice, and the content is not intended to be a substitute for professional advice. Always seek the advice of a relevant professional with any questions about any financial, legal or other decision you are seeking to make. Any views expressed by Laxman Vembar are his own and do not necessarily reflect the view, opinions and positions of FundamentalSpeculation.IO. Finally, you should not rely solely on the information provided by the models on FundamentalSpeculation.IO in making investment decisions, but you should consider this information in the context of all information available to you.