Tesla Autopilot 2: Moving Targets

I have written a few articles on Tesla (NASDAQ:TSLA) challenging some of the assumptions people make about its lead in semi-autonomous/autonomous driving systems. In this article I want to talk about how expectations for Tesla’s Autopilot 2.0 system have changed over the past six months, and to refute one of the most common arguments given in support of Tesla’s progress.

Wrinkles in the Silk
Ever since Tesla released the first version of its Autopilot 2.0 system, Tesla owners have been waiting for it to reach parity with the Mobileye/Intel (NASDAQ:INTC)-based Autopilot 1.0 system. I think even the most ardent supporters will admit it has taken longer than initially expected. Recently Elon Musk tweeted that the next update to the Autopilot software makes it feel “as smooth as silk.” This got a lot of supporters excited. The Tesla fan site TMC, however, recently posted a video review of this update. As you can see from the reviewer’s comments in the video, he came away a little disappointed with the progress.

Parsing Elon’s words, it looks like he was talking more about how the system steers the vehicle along what it believes to be the drive path, rather than about detecting the drive path itself. More on that later.

Moving Targets

When Tesla announced the new version of Autopilot 2.0 and released the demo video, a lot of Tesla supporters got excited about self-driving car technology being “just around the corner.” Look at the video, they said. Tesla is almost there and just needs to perfect the technology and get regulatory approval, they claimed.

With the much-delayed release of the first version of the new system, and performance that a group of its customers found dangerous enough to prompt a lawsuit, the goalposts slowly started moving toward more of a driver-assist system. “The primary goal for Tesla is to reach parity with Autopilot 1.0” became the new mantra, and by all indications we are not there even now, six months down the line.

Another narrative started cropping up around this time, claiming there is a “different codebase” for self-driving that is in fact much further ahead; we were supposedly just seeing the results of the driver-assist codebase, whose goal was merely to be a little better than Autopilot 1.0. A lot of this, I believe, comes down to the idea that these deep learning systems are monolithic “black box” AI minds that can learn from our actions. In one of my articles I tried to explain, based on Tesla’s own demo video, why this is not the case.

“I also believe there has been some confusion recently on the approach Tesla is taking using Nvidia (NASDAQ: NVDA) hardware because Nvidia published a paper about using their GPUs to achieve a “behavior cloning” solution. The paper is available here. While this indeed is one approach (note this is also similar to the ALVINN project at CMU), Tesla does not seem to be using such an approach based on their video demo of Tesla Vision seen here. Notice how the system identifies and classifies individual objects in the view. This indicates a more mediated modular approach, similar to Google/Mobileye rather than a raw image input system described in the Nvidia paper.”

I want to try to explain it in a different way. Let’s say you want to build a “black box” DNN model to achieve autonomous driving. What do you think it will see in the following picture?

[Image: a busy street scene with cars ahead, their taillights lit, and a traffic light at the corner of the frame]

Remember, the DNN has no notion of what individual objects are. It doesn’t know what a vehicle is, what a road is, what traffic lights are, what pedestrians look like, what buildings are, etc. It just gets raw pixel data as input, and its only “answer key” is the driver’s steering angle, brake, and throttle inputs. Even if you believe the system has somehow learned features that let it understand that a small, roughly circular group of red pixels in the corner of the frame (a traffic light) means it has to stop, how does it tell the difference between that and the taillights of the cars ahead in the scene above? The idea is absurd. The Nvidia study and other such efforts are interesting in a purely academic sense, to see what features the model learns. The approach is useless in any practical implementation.
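To make this concrete, here is a minimal PyTorch sketch of such an end-to-end behavior-cloning network, loosely patterned on the architecture described in the Nvidia paper. The class name, layer sizes, and input resolution are illustrative assumptions on my part, not anyone’s production code.

    import torch
    import torch.nn as nn

    class EndToEndDriver(nn.Module):
        """Behavior-cloning net in the spirit of the Nvidia paper: raw camera
        pixels in, a single steering command out. Nothing inside explicitly
        represents lanes, lights, or vehicles."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
                nn.Conv2d(48, 64, 3), nn.ReLU(),
                nn.Conv2d(64, 64, 3), nn.ReLU(),
                nn.Flatten(),
            )
            self.head = nn.Sequential(
                nn.LazyLinear(100), nn.ReLU(),
                nn.Linear(100, 50), nn.ReLU(),
                nn.Linear(50, 1),  # the predicted steering angle
            )

        def forward(self, pixels):  # pixels: (N, 3, 66, 200) raw frames
            return self.head(self.features(pixels))

    # The only supervision is what the human driver actually did; the
    # network never sees a labeled traffic light or lane line.
    model = EndToEndDriver()
    frame = torch.rand(1, 3, 66, 200)  # one raw camera frame
    loss = nn.functional.mse_loss(model(frame), torch.tensor([[0.05]]))

Notice that the only training signal is the driver’s control inputs. If a model like this ever stops for a red light, it is a statistical accident of the training data, not because anything inside it knows what a traffic light is.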

So what does that mean for Tesla’s efforts? As I indicated in my earlier article, Tesla is also using a mediated, modular approach in which each DNN in a suite is trained to look for specific features. The drive path identification may itself consist of several models. For example, you could have:

  1. A model trained to find the drive path based on lane markings on divided highways
  2. A model trained to find the drive path based on lane markings on undivided roads
  3. A model trained to find the drive path based on edge detection (pavement edges, trees, etc.) for smaller roads

Each model is optimized to identify the driving lane in its own setting. The system may choose the drive path suggested by one or more of these models based on:

  • The model’s confidence in each case,
  • A choice based on what type of road the system believes it’s on (based on its location on a map, for example), or
  • A weighted blending of the outputs of the individual models (one possible arbitration scheme is sketched in the code below).
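Purely as an illustration, here is one way such an arbitration could look. The model names, outputs, and weighting scheme are my own assumptions for the sketch, not Tesla’s actual design.

    import numpy as np

    # Hypothetical specialists: each returns a proposed drive path (here, a
    # polyline of lateral offsets ahead of the car) plus a confidence score.
    def divided_highway_model(frame):  return np.zeros(10), 0.9
    def undivided_road_model(frame):   return np.full(10, 0.2), 0.4
    def road_edge_model(frame):        return np.full(10, -0.1), 0.3

    MODELS = [divided_highway_model, undivided_road_model, road_edge_model]

    def fuse_drive_path(frame, road_type_hint=None):
        """Arbitrate among the specialists: trust a map-based hint outright
        when we have one, otherwise blend all proposals by confidence."""
        proposals = [m(frame) for m in MODELS]
        if road_type_hint == "divided_highway":
            return proposals[0][0]                   # map-based selection
        paths = np.stack([path for path, _ in proposals])
        weights = np.array([conf for _, conf in proposals])
        return (weights[:, None] * paths).sum(axis=0) / weights.sum()

    frame = None                                     # stand-in for a camera frame
    print(fuse_drive_path(frame))                    # confidence-weighted blend
    print(fuse_drive_path(frame, "divided_highway")) # map says: use specialist 1

Whether a real system picks one specialist or blends several, the point is the same: the drive path comes out of purpose-built modules, not out of one monolithic mind.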

The bottom line is that the task of identifying the drive path is a module shared by both the driver-assist system and a fully autonomous system. The only difference in a fully autonomous system is the presence of additional modules, needed only for full autonomy, such as one to identify traffic lights and their states.
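A toy sketch of that composition, with hypothetical module names of my own invention, looks like this:

    class PerceptionStack:
        """A suite of per-task modules, each producing a structured output
        for the planner. Purely illustrative; the names are mine."""
        def __init__(self, modules):
            self.modules = modules

        def perceive(self, frame):
            return {name: fn(frame) for name, fn in self.modules.items()}

    def drive_path(frame):     return "lane centerline"       # shared module
    def lead_vehicle(frame):   return "distance and speed"    # shared module
    def traffic_lights(frame): return "red / yellow / green"  # autonomy-only

    driver_assist = PerceptionStack({"drive_path": drive_path,
                                     "lead_vehicle": lead_vehicle})
    # Full autonomy reuses the driver-assist modules and layers more on top.
    full_autonomy = PerceptionStack({**driver_assist.modules,
                                     "traffic_lights": traffic_lights})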

Coming back to Elon’s tweet, what he seemed to be suggesting is that they have improved the procedural code that controls how the vehicle navigates its perceived world view. The new code is apparently “smoother” in moving from the current state to the target steering angle, throttle/brake position, etc. It does not indicate any significant improvement in the ML models themselves.
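For intuition, here is a toy sketch of what “smoothing” in procedural control code might mean. The function, gain, and limit values are my assumptions for illustration only.

    def smooth_toward(current, target, alpha=0.3, max_step=0.5):
        """Ease an actuator command toward its target instead of jumping:
        low-pass filter the change, then cap it per control tick."""
        desired_step = alpha * (target - current)           # exponential smoothing
        step = max(-max_step, min(max_step, desired_step))  # slew-rate limit
        return current + step

    angle = 0.0                                  # current steering angle, degrees
    for tick in range(5):
        angle = smooth_toward(angle, target=10.0)
        print(f"tick {tick}: {angle:.2f} deg")

Smoothness of this kind lives entirely downstream of perception; the same perception models, driven by gentler control code, will feel “silkier” without seeing the world any better.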

Conclusion

So what does this mean? It means that the performance of the current system is where Tesla actually stands in the race to build semi-autonomous/fully autonomous systems. There is no “different codebase” for fully autonomous driving that performs much better. It also means people will soon start to realize that Tesla is far from being the leader in bringing these systems to market.

Will that finally cause a re-pricing of the stock? I don’t know. Maybe it will, or maybe people will start looking at the prospects of the Model Y and start bidding the stock up again. I have been wrong so far, and I would not recommend anyone short Tesla based on a fundamental thesis. At this point in time it just does not matter. I find the roller coaster entertaining, though, and am more than willing to pay my small entrance fee to be on the ride.

Disclosure: I am/we are short TSLA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: The Content provided in this article should be used for informational and educational purposes only and is not intended to provide tax, legal, insurance, investment, or financial advice, and the content is not intended to be a substitute for professional advice. Always seek the advice of a relevant professional with any questions about any financial, legal or other decision you are seeking to make. Any views expressed by Laxman Vembar are his own and do not necessarily reflect the view, opinions and positions of FundamentalSpeculation.IO. Finally, you should not rely solely on the information provided by the models on FundamentalSpeculation.IO in making investment decisions, but you should consider this information in the context of all information available to you.
