NTSB Finds Tesla Autopilot Partly to Blame for Fatal Crash

The U.S. National Transportation Safety Board has concluded that the crash that killed the driver of a 2015 Tesla Model S electric sedan in Florida last year was at least partly due to the limitations of “system safeguards” on the vehicle’s Autopilot semiautonomous feature.

According to Reuters, NTSB chairman Robert Sumwalt said: “Tesla allowed the driver to use the system outside of the environment for which it was designed and the system gave far too much leeway to the driver to divert his attention.”

Autopilot is designed to control the steering and speed of a vehicle driving on a highway with exit and entrance ramps, well-defined medians and clear lane markings. Since it’s not intended to have full self-driving capability, the system alerts the driver repeatedly with visual and audible warnings to pay attention and keep his or her hands on the steering wheel.

But in January, both the NTSB and the National Highway Traffic Safety Administration determined that Joshua Brown, the driver of the Model S, had set the vehicle’s cruise control at 74 mph (higher than the 65-mph limit), was not driving on a controlled-access highway and ignored the system’s warnings to remain alert.

So when a semitruck turned left across the path of Brown’s vehicle, the Autopilot system failed to respond because it’s not designed to detect crossing traffic, and the driver did not apply the brakes or otherwise take control. As a result, the Model S crashed into the side of the truck, killing Brown instantly.

At the time, NHTSA concluded that the vehicle had no defects and that Autopilot had performed as designed. And NTSB attributed the crash to driver error.

Now, however, NTSB says that Autopilot’s “operational design” was at least a contributing factor to the crash because, as configured at the time, it allowed drivers to keep their hands off the steering wheel and otherwise let their attention wander from the road for extended periods of time. In other words, drivers can override or ignore warnings from the system, putting them at risk for collisions.

NTSB has devised a number of recommendations for automakers developing partially autonomous vehicles. These include going beyond simple alerts to ensure driver engagement, blocking the use of a self-driving system beyond the limits of its design, and making sure these systems are only used on specific types of roads.

Tesla responded that it would evaluate the agency’s recommendations and “will also continue to be extremely clear with current and potential customers that Autopilot is not a fully self-driving technology and drivers need to remain attentive at all times.”

Tesla has continuously updated Autopilot since its introduction. For example, the latest version doesn’t just give warnings; it will shut off completely if the driver doesn’t take control of the wheel.

Although the Tesla Autopilot crash prompted continued NTSB scrutiny, the agency stressed that its recommendations apply to other automakers as well. It specifically mentioned Audi, BMW, Infiniti, Mercedes-Benz and Volvo, suggesting that their semiautonomous systems should also receive upgraded warnings and features that prevent drivers from using them improperly.

Fatal Tesla Autopilot crash due to ‘over-reliance on automation, lack of safeguards’

The United States’ National Transportation Safety Board (NTSB) has released its final findings on the fatal crash involving a Tesla Model S operating in its semi-autonomous Autopilot mode.

The crash occurred in Florida in May 2016, when Joshua Brown’s Tesla Model S collided with the underside of a tractor-trailer as the truck turned onto the non-controlled-access highway.

Tesla’s Autopilot system is a Level 2 semi-autonomous driving mode, designed to automatically steer and accelerate a car while it’s on a controlled-access motorway or freeway with well-defined entry and exit ramps.

According to the NTSB, Tesla’s Autopilot functioned as programmed because it was not designed to recognise a truck crossing into the car’s path from an intersecting road. As such, it did not warn the driver or engage the automated emergency braking system.

The report said the “driver’s pattern of use of the Autopilot system indicated an over-reliance on the automation and a lack of understanding of the system limitations”.

The NTSB’s team concluded that “while evidence revealed the Tesla driver was not attentive to the driving task, investigators could not determine from available evidence the reason for his inattention”.

It also noted that “the truck driver had used marijuana before the crash” but that “his level of impairment, if any, at the time of the crash could not be determined from the available evidence”.

Tesla did not escape blame, with the NTSB calling out the electric car maker for its ineffective methods of ensuring driver engagement.

In issuing the report, Robert L. Sumwalt III, the NTSB’s chairman, said, “System safeguards, that should have prevented the Tesla’s driver from using the car’s automation system on certain roadways, were lacking and the combined effects of human error and the lack of sufficient system safeguards resulted in a fatal collision that should not have happened”.

The electric car maker has since made changes to its Autopilot system, including reducing the interval before it begins warning the driver that their hands are off the steering wheel.

As part of its findings, the NTSB also issued a number of recommendations to various government authorities and car makers with level two self-driving features.

The NTSB called for standardised data logging formats, safeguards to ensure autonomous driving systems are used only in the manner for which they were designed, and improved monitoring of driver engagement in vehicles fitted with autonomous and semi-autonomous safety systems.

Joshua Brown’s family issued a statement through their lawyers earlier this week in anticipation of the NTSB’s report.

“We heard numerous times that the car killed our son. That is simply not the case,” the family said. “There was a small window of time when neither Joshua nor the Tesla features noticed the truck making the left-hand turn in front of the car.

“People die every day in car accidents. Change always comes with risks, and zero tolerance for deaths would totally stop innovation and improvements.”


Tesla shares blame for fatal Autopilot crash according to NTSB report

The U.S. National Transportation Safety Board (NTSB) has completed its investigation into a fatal crash involving a semi-truck and a Tesla Model S utilizing automated driving systems. The reasons for the crash are complex, but the report highlights issues with self-driving vehicles that should be of concern.

The incident happened in May of 2016 in Florida. It gained wide media attention because the fatality in the wreck was the driver of a Tesla Model S who was using the car’s “Autopilot” semi-automated driving system. Blame for the wreck has been bandied about, thrown at both the commercial vehicle’s driver and the Tesla driver. Based on evidence from the crash, the NTSB’s report blames both drivers and the way Tesla’s Autopilot handled the situation.

Tesla Motors has taken a lot of flak for the name of its system and for its reliance on small print to explain that it is not, in fact, a fully autonomous driving system as the name might imply. To the company’s credit, though, it has revised much of its marketing and has now changed the software that controls the Autopilot system, which the NTSB report noted.

Yet blame for the crash itself is not terribly important. What’s more important is what can be learned from it: namely, some of the inherent dangers in autonomous vehicles, our perception of them, and how they’ll function in a world with mixed human and computer drivers on the road. The near future of vehicle automation will shape the public’s perception of self-driving vehicles for some time.

In the NTSB’s report on the fatal Tesla crash, blame was placed on the driver of the semi-truck, the Tesla driver, and the car’s automated systems. All three drivers (truck driver, car driver, and computer) made serious mistakes that ultimately led to the accident.

The semi-truck driver did not yield proper right of way, causing the big rig to move in front of the Tesla unexpectedly. The driver of the Model S was not paying attention to the road at all, relying solely on the automated driving systems in the car. The Autopilot system was not designed for fully automated driving and had no way of “seeing” the oncoming crash due to limitations in its sensor setup. Nor was the Tesla adequately engaging the driver with warnings about his inattention to the road or the task of driving.

So the crash proceeded as follows: the truck driver failed to yield right of way and entered the Tesla’s path as the car continued forward. The only indication of possible impairment of the truck driver was a trace of marijuana in his blood; no other distractions were found in the investigation.

Meanwhile, the Model S driver was not paying attention to the road at all, though what exactly the driver was doing is undetermined. The driver’s cause of death was definitely crash-related, however, indicating that the driver did not suffer a medical emergency or other problem that could have led to the incident. The driver had a history, according to the Tesla’s recording software, of misusing the Autopilot system in this way.

The Tesla Model S’s Autopilot system had alerted the driver several times to his inattention but, the NTSB found, had not done enough to prevent the driver from relinquishing all control to the car. Furthermore, the sensors and systems on board the Model S were not capable of registering the truck or its potential (and eventual) crossing of the car’s path, and thus did not engage emergency braking or avoidance maneuvers. That latter part attests to the often misunderstood nature of today’s semi-automated driving systems.

From these facts, the NTSB listed several recommendations for semi-automated vehicles to meet. In its own investigation into the crash and with early input from the NTSB, Tesla found problems with the Autopilot driver inattention warning system, and has since taken steps to remedy them. Tesla Motors has also revised most of its current marketing materials to further emphasize that the Autopilot system is not a fully-automated driving system capable of completely autonomous vehicle operation and that drivers are still required to be engaged in driving even when Autopilot is activated.

The NTSB is recommending that manufacturers put restrictions in place to keep semi-automated vehicle control systems working within the confines of their design conditions and to prevent drivers from misusing them. This would mean that a semi-automated system designed for use at highway speeds would not operate at lower speeds, and would not function in driving situations that require reading road signs or yielding to pedestrian crossings.
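A restriction like the one recommended above amounts to a gate that checks current conditions against the system’s design domain before allowing engagement. The sketch below is illustrative only; the road types, speed bounds, and parameter names are assumptions made up for this example, not any manufacturer’s actual criteria.

```python
# Illustrative sketch of an "operating domain" gate of the kind the NTSB
# recommends: the assist system engages only when conditions match the
# roads and speeds it was designed for. All constants and parameter
# names are hypothetical.

ALLOWED_ROAD_TYPES = {"controlled_access_highway"}
MIN_SPEED_MPH, MAX_SPEED_MPH = 40, 90  # assumed design bounds

def may_engage(road_type: str, speed_mph: float,
               cross_traffic: bool, pedestrian_crossings: bool) -> bool:
    """Return True only when conditions fall inside the design domain."""
    return (road_type in ALLOWED_ROAD_TYPES
            and MIN_SPEED_MPH <= speed_mph <= MAX_SPEED_MPH
            and not cross_traffic
            and not pedestrian_crossings)
```

Under a gate like this, the fatal crash scenario (a non-controlled-access road with cross traffic) would have refused to engage in the first place.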

Today, most semi-automated driving systems being used at the consumer level are based around adaptive cruise control designs. These are made to watch traffic on a freeway or highway, where multiple lanes are available, but cross-traffic and pedestrians do not exist. These systems commonly require the driver to have hands on the steering wheel at all times and are often now augmented by “driver awareness” indicators that measure how attentive the driver is. Most work by gauging the driver’s ability to keep the vehicle within its lane without assistance. Some also work by noting the driver’s head position, input to the steering wheel, and position in the seat.
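The escalation pattern these systems use — visual warning, then audible warning, then disengagement — can be sketched as a simple hands-off timer. This is a minimal illustration, not Tesla’s implementation; the thresholds and state names are invented values.

```python
# A minimal sketch (not any manufacturer's actual code) of an escalating
# driver-engagement monitor: a hands-off timer that issues a visual
# warning, then an audible warning, then locks the feature out for the
# rest of the trip. All thresholds are assumed values for illustration.

from dataclasses import dataclass

@dataclass
class EngagementMonitor:
    visual_after_s: float = 15.0    # hypothetical warning thresholds
    audible_after_s: float = 30.0
    lockout_after_s: float = 60.0
    hands_off_s: float = 0.0
    locked_out: bool = False

    def update(self, dt_s: float, hands_on_wheel: bool) -> str:
        """Advance the timer by dt_s seconds and return the required action."""
        if self.locked_out:
            return "lockout"            # stays off for the rest of the trip
        if hands_on_wheel:
            self.hands_off_s = 0.0      # torque on the wheel resets the timer
            return "ok"
        self.hands_off_s += dt_s
        if self.hands_off_s >= self.lockout_after_s:
            self.locked_out = True
            return "lockout"
        if self.hands_off_s >= self.audible_after_s:
            return "audible_warning"
        if self.hands_off_s >= self.visual_after_s:
            return "visual_warning"
        return "ok"
```

In this sketch, a driver who ignores both warning tiers loses the feature for the remainder of the trip, the behavior the newer Autopilot software reportedly adopts.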

The NTSB also called for vehicle event data to be captured in all semi-automated vehicles and made available in standard formats so investigators can more easily use them. They called for manufacturers to incorporate robust system safeguards to limit the automated control systems’ use, and they called for the development of applications to more effectively sense the driver’s level of engagement.
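A standardized record of the kind called for above might look like the following. This is a hypothetical example only; the field names and JSON layout are assumptions, not any real logging standard.

```python
# Hypothetical example of a standardized, machine-readable event record
# of the sort the NTSB calls for, so investigators' tooling could parse
# logs from any manufacturer. Field names are illustrative assumptions.

import json

def make_event_record(timestamp_utc: str, speed_mph: float,
                      autopilot_engaged: bool, hands_on_wheel: bool,
                      active_warnings: list) -> str:
    """Serialize one telemetry sample to a canonical JSON record."""
    return json.dumps({
        "timestamp_utc": timestamp_utc,
        "speed_mph": speed_mph,
        "autopilot_engaged": autopilot_engaged,
        "hands_on_wheel": hands_on_wheel,
        "active_warnings": active_warnings,
    }, sort_keys=True)
```

A common schema like this is what would let an investigator reconstruct, for any vehicle, the kind of timeline the NTSB pieced together from Tesla’s proprietary logs.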

The NTSB also asked manufacturers to more closely report incidents involving semi-automated vehicle control systems. These recommendations were issued to the National Highway Traffic Safety Administration, the U.S. Department of Transportation, the Alliance of Automobile Manufacturers, the Global Automakers group, and to individual manufacturers designing and implementing autonomous vehicle technologies.

With the release of the NTSB’s summary report today, the U.S. Department of Transportation also released its own guidance on automated driving systems. These federal guidelines are given as suggestions that vehicle manufacturers are asked to voluntarily follow.


Tesla’s Autopilot 2.0 Is Looking Like A Flop – Tesla Motors (NASDAQ:TSLA)

Intro – Autopilot Hardware 2.0 Supposed To Enable Level 5 Self-Driving Capabilities

Tesla (TSLA) made a bold move in October 2016 when it decided to begin equipping all of its vehicles with a new suite of computers, cameras and sensors known as Autopilot Hardware 2.0. This package was promised to enable fully self-driving capability pending software updates and regulatory approval.

This move came shortly after Tesla ended its partnership with Mobileye (MBLY) and doubled down on its internal autonomous driving program.

Ever since then, it has been a long and arduous process for Tesla to ramp up the feature set of its new Autopilot Hardware 2.0 to surpass version 1.0.

A slower-than-expected ramp has raised concerns among investors about Tesla’s ability to fulfill its self-driving promises.

Now, this increasing skepticism appears to have been justified, given recent news about a newer Autopilot Hardware package.

Autopilot Hardware 2.5 Popping Up In Model 3s?

On August 9th, Electrek reported that Tesla has begun installing a new version of its Autopilot hardware (dubbed 2.5) into the Model 3.

When asked about this new Autopilot Hardware 2.5 package, Tesla quickly downplayed its significance with the following comment:

The internal name HW 2.5 is an overstatement, and instead it should be called something more like HW 2.1. This hardware set has some added computing and wiring redundancy, which very slightly improves reliability, but it does not have an additional Pascal GPU.

However, Tesla also added this disclosure, indicating there was a possibility that Autopilot Hardware 2.0 might have to be updated to fulfill its self-driving promise (emphasis mine).

However, we still expect to achieve full self-driving capability with safety more than twice as good as the average human driver without making any hardware changes to HW 2.0. If this does not turn out to be the case, which we think is highly unlikely, we will upgrade customers to the 2.5 computer at no cost.

Although management is trying to make it seem like there is only a small possibility this will occur, the fact that they disclosed it at all is worrisome.

Additionally, with a slew of executive changes on the Autopilot team, there are currently a lot more questions than answers surrounding the state of Tesla’s self-driving technology.

Executive Departures

The Wall Street Journal recently reported that Tesla’s ambitious Autopilot efforts are causing a lot of internal issues.

This news comes on the heels of many executive departures thus far in 2017.

One of the first signs of trouble was the departure of Sterling Anderson, Tesla’s head of Autopilot, in early 2017. This was just a couple of months after Tesla’s big commitment to Autopilot Hardware 2.0 and its ability to enable fully autonomous driving.

According to Electrek, one of the main reasons behind Anderson’s departure was a disagreement with Elon Musk about the capabilities of Autopilot Hardware 2.0. Anderson was far more skeptical about the ability to achieve full autonomy without upgrading the hardware package further.

Although this departure seemed explicable, and likely the normal course of business for a fast-moving tech company, the next one was far less so.

To replace Anderson in early 2017, Tesla hired Chris Lattner away from Apple (AAPL). He is regarded as one of the best engineers in the Valley and was responsible for creating the Swift programming language.

Then, out of nowhere, Lattner decided to leave Tesla in July, citing differences with Elon Musk. Since then, he has been picked up by Google (NASDAQ:GOOGL) (GOOG) to bolster its self-driving efforts.

Seeing two different Autopilot leaders leave so quickly and suddenly is cause for concern. When combined with the recent stealth rollout of Autopilot Hardware 2.5, I’m left to make a bleak conclusion.

Conclusion – Where There’s Smoke, There’s Fire

Tesla is attempting to complete one of the world’s most ambitious technology projects by creating mass-market self-driving vehicle software. This comes with roadblocks.

Elon Musk’s plan to rapidly expand the feature set of Autopilot Hardware 2.0, and make promises about its autonomous capabilities, appears to be backfiring.

Tesla is now at risk of a very damaging PR backlash if Autopilot Hardware 2.0 cannot achieve Level 5 autonomous driving.

The proactive move to begin equipping all Model 3s with Autopilot Hardware 2.5 signals a lack of confidence in the previous package.

If Tesla does need to upgrade all of Autopilot Hardware 2.0 vehicles with new tech, shareholders will be paying for it.

Long term, despite these issues, I strongly believe Tesla will persevere and continue to double down on its Autopilot program. Tesla’s lead in self-driving technology is a critical piece of the company’s bull thesis and should be watched closely by bears and bulls alike.

Expect a storm of negative press if Elon and Tesla are forced to walk back on their Autopilot Hardware 2.0 claims.

Disclosure: I am/we are long TSLA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Tesla Autopilot Effort In Disarray – Tesla Motors (NASDAQ:TSLA)

Rethink Technology business briefs for June 21, 2017.

Chris Lattner, former Apple software guru, out after six months at Tesla

Source: Twitter

It was only about six months ago that Tesla (NASDAQ:TSLA) lured Chris Lattner away from Apple (NASDAQ:AAPL). Lattner has a Ph.D. in Computer Science and had been Senior Director of the Developer Tools Department. Lattner is most famous for having developed the Swift programming language that Apple promotes heavily to developers and would-be developers.

At the time, it seemed that Lattner was being brought in to work the miracle that Tesla had promised by December but failed to deliver, Enhanced Autopilot. In January, Tesla began rolling out the new Enhanced Autopilot software to run on the new Nvidia (NASDAQ:NVDA) processing platform that was supposed to support “full self-driving capability.”

At the time, I wasn’t too concerned about the delay since I had considered the December promise to be unrealistic. Also, I assumed that there was a genuine partnership between Tesla and Nvidia. Nvidia had been working on its own autonomous vehicle and put it on display at CES. Nvidia also made available with Drive PX 2 a very comprehensive suite of machine learning and machine vision APIs to support self-driving. Surely, Nvidia would help Tesla make rapid progress towards its goals of Enhanced Autopilot and full self-driving capability.

Here we are in June, and Enhanced Autopilot still hasn’t quite reached feature parity with the original Mobileye (NYSE:MBLY) system, let alone delivered its “Enhanced” features. The latest update improves handling “smoothness,” which is to say, fixes problems in the previous update. Yet, Musk continues to promise a fully autonomous cross country drive by the end of the year.

Not long ago I had written about a teardown of the computer responsible for Enhanced Autopilot. Tesla had “customized” the Nvidia Drive PX 2 by essentially cutting it in half, reducing its complement of four processors to a single Tegra X2 ARM SOC and a GP106 Pascal architecture discrete GPU.

This was a disappointment since it meant that Tesla probably can’t get to Level 4 autonomy based on the processing power at its disposal. I’m sure this is a source of considerable frustration for everyone involved.

I’ve been in high pressure situations like this in software/hardware integration, where the hardware guys point the finger of blame at the software and the software guys blame the hardware. Often, it can be difficult to tell who or what is at fault.

It looks like Lattner was on the losing end of the blame game this time around. Too bad, since I’m not convinced he should have been.

Nvidia is part of the Rethink Technology portfolio and is a recommended buy.

Andrej Karpathy new head of AI and Autopilot Vision at Tesla

Andrej Karpathy earned his Ph.D. at Stanford in machine learning and had worked for Elon Musk’s nonprofit OpenAI research firm. Tesla and the media have made some effort to avoid casting Karpathy as Lattner’s replacement, but it’s pretty obvious he is. It’s now his task to work the miracle that Lattner couldn’t.

In fact, he is better equipped than Lattner, having more applicable education and experience in AI systems. However, his background is primarily academic, and he now faces the daunting task of squeezing all the functionality required for self-driving into a computer system that Nvidia has never even claimed is capable of supporting it.

I don’t doubt for a second that Karpathy will either achieve the miracle or be out himself in six months. Full self-driving capability could have made the difference for Tesla, providing both an unbeatable discriminator and enhanced margins, especially for the Model 3.

Travis Kalanick resigns as CEO of Uber

According to the WSJ, what precipitated Kalanick’s resignation was a letter yesterday signed by five major shareholders, demanding his resignation. Kalanick didn’t need to accede to their demands, since he still maintains an absolute voting majority of Uber (Private:UBER) shares.

But resign he did, which has come as a surprise to many. It was only about a week ago that I wrote about his leave of absence, and the report by Eric Holder detailing recommended changes in Uber’s management structure and corporate culture.

At the top of the list of recommendations was a reduced role for Kalanick, but Holder stopped well short of recommending Kalanick’s removal. I’m not convinced that Kalanick’s resignation was really necessary or even desirable. The example of a reformed Kalanick might have been more effective in reforming the company he created than anything else.

I continue to question whether it was the Holder report that really upset Uber’s minority investors so much. Uber continues to be under a court order to turn over a “due diligence” report on the Otto acquisition that brought Anthony Levandowski to Uber from Alphabet’s (NASDAQ:GOOG) (NASDAQ:GOOGL) Waymo.

This precipitated Waymo’s suit of Uber for theft of intellectual property related to autonomous vehicles. This in turn forced Uber to fire Levandowski when he wouldn’t comply with Judge William Alsup’s order to turn over 14,000 computer files Waymo alleges Levandowski took. I had speculated that the report could be the “smoking gun” that confirms Waymo’s allegations.

At this juncture, it’s not clear whether Uber has complied with the court order, but I would bet that everyone on the Uber board has seen it.

Disclosure: I am/we are long AAPL, NVDA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Tesla Autopilot can’t drive by itself

The National Transportation Safety Board has released some additional details about a fatal Tesla Model S Autopilot crash in May of 2016.

One of the findings was alarming. According to the NTSB, an agency that rarely investigates auto accidents and usually spends years delving into plane crashes: “[d]uring the last 41 minutes of [Joshua] Brown’s trip, the Model S was in Autopilot for 37.5 minutes …. Brown had his hands off the wheel for a total of 37 minutes during the time the car was in Autopilot.”

And: “The Model S displayed the visual warning ‘hold steering wheel’ seven times during the trip. Six of those were followed by auditory warnings.”

As it turns out, I’ve gathered some experience with Autopilot. Last year, Tesla loaned me a Model S P90D with the latest everything. We used it to take a trip to the Catskills in upstate New York. It was quite an adventure, and the car performed well, but …

What was Autopilot really like?

It was very advanced cruise control — and to Tesla’s credit, it has become more advanced since I explored it last year. When it’s fully activated, auto-steering allows the driver to take his hands off the wheel for short periods of time, although Tesla has repeatedly stressed that drivers shouldn’t do this. And when you activate the technology, a warning presents itself on the instrument cluster.

These are the warnings that Brown evidently failed to follow. To address that issue, Tesla has updated the software so that if you ignore the warnings, Autopilot will deactivate for the remainder of your journey.

But here’s the real deal: Warnings or not, you absolutely, positively shouldn’t take your hands off of the wheel. Ever.

The technology is very good, but after using it for only about 15 minutes on the highway, it was abundantly clear to me that Autopilot is a long, long way from the magical experience of a car driving itself.

Or even steering itself.

At Business Insider, we had an early look at Autopilot, right after it was released in a software update last year. I went for it and avidly took my hands off the wheel. But we were in the vehicle for only a short period of time.


As we used Autopilot for a longer drive, however, it rapidly became obvious that the system does exactly what Tesla says it will do: guide the car through its lane, change lanes, and monitor surrounding traffic to avoid collisions and activate braking.

It doesn’t do anything else.

Hands on the wheel!

My opinion of Autopilot is entirely my own and not based on exhaustive or remotely scientific testing. But I now think that the accidents that have occurred when the technology was activated were the result of drivers not making a rational evaluation of Autopilot’s capabilities.

I couldn’t get comfortable keeping my hands near the wheel, or “hovering” them close by. I wanted my hands on the wheel at all times, and I think that any reasonably experienced driver should feel the same way.

Beyond that, Autopilot requires monitoring. You must pay attention to what the system is doing to determine if it’s sensible to leave it on. It actually demands more, not less, driver attention.

I’ve watched some of the Autopilot videos made by Brown prior to the fatal crash. It looks to me like he was trying to push the envelope on what the system could do, consistently placing himself in a position to have to get his hands back on the wheel — and probably also on the brakes, deactivating Autopilot — to retain control of the vehicle.


I’ve been driving for over 30 years, and because of what I do for a living, I get to sample all of the latest and greatest automotive technologies. I don’t consider myself an expert driver, but I do consider myself able to make an assessment of what one should and shouldn’t do with all of the advanced cruise-control systems in the market.

But the thing is that cruise control has been with us for decades. Most drivers understand what it can and can’t do. And although Autopilot is the most significant advancement in cruise control that I’ve ever seen, it merely points at a distant moment when drivers will be able to switch off their attention and trust that their car’s brain, sensors, and radars will make better decisions.

I’m baffled

Frankly, I’m baffled that anyone would misunderstand this about Autopilot, even if Tesla has gone too far in the direction of talking up Autopilot as a preview of the future.

You’re in control of your own evaluations of new technologies. When you’re talking about a vehicle of several thousand pounds hurtling along a roadway at over 60 mph, then a simple, nearly intuitive grasp of the physics would tell you that your attention should be focused on Autopilot, with hands on the wheel, more so than with a vehicle that uses less advanced tech.

(Photo: Autopilot is activated by pulling this stalk toward you twice. Tesla)

If this sounds like I’m blaming owners for how they handle Autopilot, rather than holding Tesla accountable for releasing the tech in a “beta” version and using the owners as a test group, well, so be it.

You’re not a sheep, and you don’t have to be a guinea pig. 

You’re a driver in charge of your vehicle, and if you have a license and any time behind the wheel, then you ought to be able to figure out what’s safe and what isn’t — particularly if you’ve been informed by Tesla that you are assisting in perfecting a new technology.

When I reached out to Tesla for some additional insight into how it educates owners on Autopilot, the company noted that its owner’s manual contains a list of adverse circumstances that, in my reading, would compel any reasonable driver to deactivate an old-school cruise-control system, much less far more advanced yet still unproven tech.

It’s a car. You’re not going to be relieved of the responsibility for driving it anytime soon.

Tesla also stressed that Autopilot isn’t actually about the car driving itself. It’s about making driving safer.

“We are continuously and proactively enhancing our vehicles with the latest advanced safety technology,” a Tesla representative said in a statement at the time.


The rep added:

“This is why we’ve taken the approach of including Autopilot hardware in all of our vehicles and continually advancing the software through robust analysis of hundreds of millions of miles of driving data, behind-the-scenes iteration, and over-the-air activation of features already proven to be safer on the net over tens of millions of miles.

“[T]oday’s Autopilot features are designed to provide a hands-on experience to give drivers more confidence behind the wheel, increase their safety on the road, and make highway driving more enjoyable. … Autopilot is by far the most advanced such system on the road, but it does not turn a Tesla into an autonomous vehicle and does not allow the driver to abdicate responsibility.”

What should Tesla do?

In fairness, Tesla has been saying this all along. It is drivers who’ve taken their experience with the technology too far. But what Tesla owners now need to do is remember that, although its vehicles are impressive, they aren’t sentient rolling androids. We are still a driving species. And we will be for a while.

This column does not necessarily reflect the opinion of Business Insider.

Get the latest Tesla stock price here.

Tesla Autopilot 2: Moving Targets – Tesla Motors (NASDAQ:TSLA)

I have written a few articles on Tesla (NASDAQ:TSLA) challenging some of the assumptions people make about its lead in semi-autonomous/autonomous driving systems. In this article, I want to talk about how expectations for Tesla’s Autopilot 2.0 system have changed over the past six months and refute one of the most common reasons given in support of Tesla’s progress.

Wrinkles in the Silk

Ever since Tesla released the first version of its Autopilot 2.0 system, Tesla owners have been waiting for it to reach parity with the Mobileye/Intel (NASDAQ:INTC)-based Autopilot 1.0 system. I think even the most ardent supporters will admit it has taken longer than initially expected. Recently, Elon Musk tweeted that the next update to the Autopilot software makes it feel “as smooth as silk.” This got a lot of supporters excited. However, the Tesla fan site TMC recently posted a video review of this update, and as you can see from the reviewer’s comments, he came away a little disappointed with the progress.

Parsing Elon’s words, it looks like he was talking more about how the system steers the vehicle along what it believes to be the drive path, rather than about how it detects the drive path itself; more on that later.

Moving Targets

When Tesla announced its new version of Autopilot 2.0 and released the demo video, a lot of Tesla supporters got excited about self-driving-car technology being “just around the corner.” Look at the video, they said. Tesla is almost there, they claimed, and just needs to perfect the technology and get regulatory approval.

With the much-delayed release of the first version of the new system, and performance that a group of customers found dangerous enough to file a lawsuit over, the goalposts slowly started moving toward more of a driver-assist system. “The primary goal for Tesla is to reach parity with Autopilot 1.0” became the new mantra, and by all indications we are not there even now, six months down the line.

There was another narrative that started cropping up at this time, claiming that there is a “different codebase” for self-driving and that it is in fact much further ahead; we were supposedly just seeing the results of the driver-assist codebase, whose goal was merely to be a little better than Autopilot 1.0. A lot of this, I believe, comes down to the idea that these deep learning systems are monolithic “black box” AI minds that can learn from our actions. In one of my articles, I tried to explain why this is not the case based on Tesla’s own demo video.

“I also believe there has been some confusion recently on the approach Tesla is taking using Nvidia (NASDAQ: NVDA) hardware because Nvidia published a paper about using their GPUs to achieve a “behavior cloning” solution. The paper is available here. While this indeed is one approach (note this is also similar to the ALVINN project at CMU), Tesla does not seem to be using such an approach based on their video demo of Tesla Vision seen here. Notice how the system identifies and classifies individual objects in the view. This indicates a more mediated modular approach, similar to Google/Mobileye rather than a raw image input system described in the Nvidia paper.”

I want to try to explain it in a different way. Let’s say you want to build a “black box” DNN model to achieve autonomous driving. What do you think it will see in the following picture?

Remember that the DNN has no notion of individual objects. It doesn’t know what a vehicle is, what a road is, what traffic lights are, what pedestrians look like, what buildings are, and so on. It just gets raw pixel data as input, along with a “correct” answer key in the form of the user’s steering angle, brake, and throttle application. Even if you believe that the system has somehow learned features telling it that a small, roughly circular group of red pixels in the corner of the frame (a traffic light) means it has to stop, how does it tell the difference between that and the taillights of the cars ahead in the scene above? The idea is just absurd. The Nvidia study and other such efforts are interesting for seeing what features the model learns, in a purely academic sense, but the approach is useless in any practical implementation.
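To make the input/output mismatch concrete, here is a toy sketch of what such an end-to-end (“behavior cloning”) network boils down to: a flat pixel vector goes in, a single steering angle comes out, and nowhere in the model is there a named concept of a traffic light, a vehicle, or a road. The layer sizes and random weights are purely illustrative, not a real trained network or Nvidia’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for an end-to-end network: raw pixels in, one
# steering angle out. Real systems use deep convolutional stacks; a
# single random hidden layer is enough to show the interface.
W1 = rng.standard_normal((64, 66 * 200 * 3)) * 0.01  # pixels -> hidden
W2 = rng.standard_normal((1, 64)) * 0.01             # hidden -> steering

def predict_steering(frame):
    """frame: (66, 200, 3) uint8 image -> one predicted steering angle."""
    x = frame.reshape(-1) / 255.0  # the net sees only a flat pixel vector
    h = np.tanh(W1 @ x)
    # The sole output: a number. No object labels exist anywhere inside.
    return float((W2 @ h)[0])

frame = rng.integers(0, 256, size=(66, 200, 3), dtype=np.uint8)
angle = predict_steering(frame)
```

The training signal is equally opaque: pairs of (frame, human steering angle), nothing more, which is exactly why attributing object-level understanding to such a model is a stretch.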

So what does that mean for Tesla’s efforts? As I indicated in my earlier article, Tesla is also using a mediated, modular approach in which a suite of DNNs is each trained to look for specific features. The drive-path identification may itself consist of several models. For example, you might have:

  1. A model trained to look for drive path based on lane markings on divided highways
  2. A model trained to look for drive path based on lane markings on undivided roads
  3. A model trained to look for the drive path based on edge detection (pavement edges, trees, etc.) for smaller roads

Each model is optimized to identify the driving lane in its particular setting. The system may choose to use the drive path suggested by one or more of these models based on:

  • The model’s confidence in each case
  • A choice based on what type of road the system believes it’s on (based on its location on a map, for example), or
  • A weighted blending of the output of the individual models.
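A minimal sketch of how such an arbiter might combine the specialist models’ outputs, assuming each model reports a candidate drive path (here just a list of lateral offsets along the road ahead) plus a confidence score. The threshold and all numbers are hypothetical, not anything Tesla has disclosed:

```python
def fuse_drive_paths(candidates, min_confidence=0.3):
    """candidates: list of (path, confidence) pairs, one per specialist model.
    Returns a confidence-weighted blend of the paths that clear the
    threshold, or None if no model is confident enough."""
    usable = [(p, c) for p, c in candidates if c >= min_confidence]
    if not usable:
        return None  # no confident model -> hand control back to the driver
    total = sum(c for _, c in usable)
    n = len(usable[0][0])
    return [sum(p[i] * c for p, c in usable) / total for i in range(n)]

# Hypothetical outputs of the three specialist models listed above
highway   = ([0.0, 0.1, 0.2], 0.9)  # lane markings, divided highway
undivided = ([0.0, 0.2, 0.4], 0.5)  # lane markings, undivided road
edges     = ([0.0, 0.5, 1.0], 0.1)  # edge detection; low confidence here

blended = fuse_drive_paths([highway, undivided, edges])
```

The low-confidence edge-detection path is dropped, and the result leans toward the highway model in proportion to its confidence; a real system would make this arbitration far more sophisticated, but the modular shape is the point.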

The bottom line is that the task of identifying the drive path is a module shared by both the driver-assist system and a fully autonomous system. The only difference is that a fully autonomous system has additional modules, for example one to identify traffic lights (and their states), that are needed only for full autonomy.

Coming back to Elon’s tweet, what he seemed to be suggesting was that they have improvements in the procedural code that controls how the vehicle navigates in its perceived world view. The new code apparently is “smoother” in moving from the current state to the target steering angle, throttle/brake position, etc. It doesn’t indicate any significant improvement in the ML models themselves.
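One way to picture that kind of procedural-control change, as opposed to an improvement in the perception models, is a simple slew-rate limiter on the steering command. This is my own illustrative sketch, not Tesla’s actual control code, and the step size is arbitrary:

```python
def smooth_steering(current, target, max_step=1.5):
    """Rate-limit the move from the current steering angle (degrees)
    toward the planner's target, so the wheel turns gradually rather
    than snapping -- smoother behavior with the same perceived path."""
    delta = target - current
    delta = max(-max_step, min(max_step, delta))  # clamp per control tick
    return current + delta

# Converge from 0 degrees to a 5-degree target over successive ticks
angle = 0.0
trace = []
for _ in range(5):
    angle = smooth_steering(angle, 5.0)
    trace.append(angle)
# trace: 1.5, 3.0, 4.5, 5.0, 5.0 -- a gradual ramp instead of a jump
```

Tuning this layer can make the ride feel dramatically better while the underlying world model, and its failure modes, stay exactly the same.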


So what does this mean? This means that the performance of the current system is where Tesla is right now in the race to build semi-autonomous/fully autonomous systems. There is no “different codebase” for fully autonomous driving that performs much better. This means people will soon start to realize that Tesla is far from being the leader in bringing these semi-autonomous/fully autonomous systems to market.

Will that finally cause a re-pricing in the stock? I don’t know. Maybe it will or maybe people will start looking at the prospects of the Model Y and start bidding it up again. I have been wrong so far and I would not recommend anyone short Tesla based on a fundamental thesis. At this point in time it just does not matter. I find the roller coaster entertaining though and am more than willing to pay my small entrance fee to be on the ride.

Disclosure: I am/we are short TSLA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: The Content provided in this article should be used for informational and educational purposes only and is not intended to provide tax, legal, insurance, investment, or financial advice, and the content is not intended to be a substitute for professional advice. Always seek the advice of a relevant professional with any questions about any financial, legal or other decision you are seeking to make. Any views expressed by Laxman Vembar are his own and do not necessarily reflect the view, opinions and positions of FundamentalSpeculation.IO. Finally, you should not rely solely on the information provided by the models on FundamentalSpeculation.IO in making investment decisions, but you should consider this information in the context of all information available to you.

New App “Co-pilot for Tesla” Adds Much-Needed Alerts To Tesla’s Autopilot


Published on June 3rd, 2017 |
by Kyle Field


New smartphone app Co-pilot for Tesla* aims to leverage crowdsourced data to fill in the safety gaps in Tesla’s current autopilot solution.

Where did the idea for Tesla Co-pilot come from?

The idea for the Co-pilot for Tesla app came about as a result of founder Jeff’s experiences using Autopilot in his Tesla. He loved the convenience of Autopilot/Autosteer and was hooked on using it. A few weeks in, however, he had a few close calls around town where the system responded inappropriately to temporary traffic-flow changes in ways that could have ended terribly. The near misses scared him to the point where he was not comfortable using Autopilot for quite a few months, which kickstarted an innovative period for him.

Jeff’s experiences are not isolated, with many users in the Tesla Motors Club forums posting similar experiences with Autopilot. Consumer Reports even went so far as to demand that Tesla make sweeping changes to Autopilot in response to a fatality that occurred while a driver was using the system.

While taking time away from Autopilot, Jeff started thinking about what could be done for the times when Autopilot misreads, misinterprets, or fails to react in a way that creates a dangerous situation. Autopilot asking a driver to take the wheel on very short notice in the middle of complicated traffic does not ensure that the situation will end well.

Jeff saw the potential in Autopilot and had an idea about how he could take it to the next level. After months of working through a few options, he created the solution in the form of a smartphone app he called “Co-pilot for Tesla.”

What is the Co-pilot app?

Co-pilot for Tesla is a crowdsourced, GPS-powered app that allows users to flag areas where the Tesla Autopilot system has failed for them. These alerts allow users to take control of the vehicle or at least closely monitor the situation to confirm their vehicle handles the potentially risky situation appropriately. With the downside of an Autopilot fail having the potential to be a life-altering event, advance warning of the locations where the system has failed can literally be a lifesaver.

Users can opt to simply sit on the receiving end of the app and take advantage of the alerts, or to actively contribute and enter the instances where Autopilot has failed for them. Importantly, the solution’s ability to gather vast amounts of user-generated, real-life data about Autopilot system bugs has the potential to drive improvements in the system at Tesla as well, not just provide data to Autopilot users.

How does the Co-pilot app work?

The fundamentals of the app, predictably, make use of the crowdsourced user data combined with the GPS location from the smartphone, but it does much more than that to achieve a higher degree of accuracy and functionality.

The app starts with a solid underpinning of map layers that it merges with crowdsourced insights as the foundational components of the app’s intelligence. To intelligently create Autopilot events, the app utilizes the mic on the smartphone to detect when Autopilot is turned on or off by recognizing the engage/disengage chimes.

It is worth mentioning that the audio from the mic is not sent to the cloud nor is it stored. It simply listens for the Autopilot on/off tones and triggers the start and end of Autopilot events accordingly. In the event that the start or end of a trip is missed, Co-pilot intelligently determines which portions of the trip are valid, if any. 
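As a rough illustration of how on-device chime detection could work, the sketch below checks whether the dominant frequency in a short audio frame matches an expected tone. The chime frequency, sample rate, and tolerance here are all hypothetical; the app’s actual recognizer has not been published.

```python
import numpy as np

SAMPLE_RATE = 16_000
CHIME_HZ = 880.0  # hypothetical frequency of the engage/disengage chime

def chime_present(samples, target_hz=CHIME_HZ, tolerance_hz=20.0):
    """Return True if the dominant frequency in this frame is within
    tolerance of the expected chime tone. All processing is local --
    nothing needs to be stored or sent to the cloud."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    dominant = freqs[int(np.argmax(spectrum[1:])) + 1]  # skip the DC bin
    return abs(dominant - target_hz) < tolerance_hz

# A 100 ms synthetic "chime" with a little background noise
t = np.arange(0, 0.1, 1.0 / SAMPLE_RATE)
tone = np.sin(2 * np.pi * CHIME_HZ * t)
noise = np.random.default_rng(1).standard_normal(t.size) * 0.05
detected = chime_present(tone + noise)
```

A production detector would need to be robust to road noise, music, and speech, but the privacy property described above falls out naturally: only a boolean “chime heard” event ever leaves this function.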

Co-pilot kicks that up to the next level by connecting to the user’s myTesla account, which enables it to pull vehicle data directly from the Tesla Application Programming Interface (API). The Tesla API is essentially a way for the app to listen to and talk back to the car using specific predetermined commands.
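For a sense of what that looks like, here is a sketch of building one such request. The base URL and endpoint path follow the community-reverse-engineered, unofficial owner API; they are not guaranteed by Tesla and may change, and the vehicle ID and token below are placeholders:

```python
from urllib.request import Request

# Unofficial, community-documented base URL -- not a supported Tesla API
BASE = "https://owner-api.teslamotors.com"

def vehicle_data_request(vehicle_id, kind, token):
    """Build (but don't send) an authenticated request for one slice of
    vehicle state, e.g. kind='charge_state' or 'drive_state'."""
    url = f"{BASE}/api/1/vehicles/{vehicle_id}/data_request/{kind}"
    return Request(url, headers={"Authorization": f"Bearer {token}"})

# Placeholder ID and token, for illustration only
req = vehicle_data_request(12345, "drive_state", "my-oauth-token")
```

Each response is JSON, so an app like Co-pilot can correlate Autosteer state, speed, and battery data with its own GPS-based alert map.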

What can Co-pilot do today?

Today, Co-pilot provides alerts to users for known risky areas based on crowdsourced data from other users. The alerts are overlaid on a map showing the roads in the area where other users are using Autopilot to help users see which routes have fewer issues than others.

Users can also enter their own alerts to contribute to the pool, making the app function better for everyone. It works like unpaid Amazon reviews where the more alerts a single area gets, the higher the likelihood that all users will experience issues in the area. Conversely, one-off alerts can then also be identified and dismissed.
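A sketch of how that crowd-filtering might work, assuming reports arrive as GPS coordinates: bucket them into grid cells and keep only cells with enough independent reports, so one-off flukes are dismissed. The cell size and threshold are made-up values, not anything the Co-pilot team has described:

```python
from collections import Counter

def score_alerts(reports, cell_size=0.001, min_reports=3):
    """reports: iterable of (lat, lon) pairs from users.
    Buckets reports into grid cells (~100 m at this assumed cell_size)
    and returns only the cells with at least min_reports, i.e. the
    areas where Autopilot issues are corroborated by multiple users."""
    cells = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in reports
    )
    return {cell: n for cell, n in cells.items() if n >= min_reports}

reports = [
    (37.3947, -122.1503), (37.3948, -122.1504), (37.3947, -122.1504),  # cluster
    (37.5000, -122.3000),                                              # one-off
]
hotspots = score_alerts(reports)  # the cluster survives; the one-off is dropped
```

The same aggregation also supports the map overlay described above: cells with many Autopilot miles and few alerts can be shown as low-risk routes.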

Because Co-pilot has access to data from the car through the Tesla API, it is also able to compile statistics for all trips in the vehicle. From these trips, the app automagically creates detailed charts for each, including route information (distance, time, route), Autosteer info, energy consumption, and battery charge. Swiping reveals additional graphs for battery range, speed, and elevation.

How can I get my hands on Co-pilot?

Co-pilot for Tesla is available for iOS devices today for free. Just click the App Store Download button below to pull it up. The Co-pilot team is working on an Android version for release in the near future. For more information about the app and to stay apprised of its progress, head over to the Co-pilot website or email your questions, concerns, or excitement to Jeff directly.

*This post was sponsored by Co-pilot for Tesla.


About the Author

Kyle Field I’m a tech geek passionately in search of actionable ways to reduce the negative impact my life has on the planet, save money and reduce stress. Live intentionally, make conscious decisions, love more, act responsibly, play. The more you know, the less you need. TSLA investor. Tesla referral link: http://ts.la/kyle623

Did Tesla Go Too Cheap On Enhanced Autopilot Hardware? – Tesla Motors (NASDAQ:TSLA)

Rethink Technology business briefs for May 22, 2017.

Tesla’s Nvidia car supercomputer is a little less super than we expected

Source: Nvidia

When Tesla (NASDAQ:TSLA) first announced it was providing sufficient hardware to support “full self-driving capability,” it was revealed that the processing power came from Rethink Technology favorite Nvidia (NASDAQ:NVDA).

On October 20, Nvidia published a brief announcement about it on its blog:

Tesla Motors has announced that all Tesla vehicles – Model S, Model X, and the upcoming Model 3 – will now be equipped with an on-board “supercomputer” that can provide full self-driving capability.

The computer delivers more than 40 times the processing power of the previous system. It runs a Tesla-developed neural net for vision, sonar, and radar processing.

This in-vehicle supercomputer is powered by the NVIDIA DRIVE PX 2 AI computing platform.

Drive PX 2 comes in a few different flavors, as shown above. The “Lite” PX 2 for Autocruise is furnished with a single Tegra X2 Parker system on chip (SOC), while the full-strength version for Autochauffeur contains two Parkers and two Pascal discrete GPUs.

As you can see, Nvidia expected that fully autonomous driving (probably SAE Level 5) would require two full Drive PX 2 boards. No one yet really knows how much processing power Level 5 will require.

Now, thanks to the efforts of Kyle Day, we know exactly what Tesla’s on-board “supercomputer” consists of. Day has published a teardown of the computer on the Tesla Motors Club forum. It contains a single-board computer designed by Tesla. Here, the Tesla logo is clearly visible:

The board contains an Nvidia GP106 discrete GPU (as found in the GTX 1060) mounted on a separate, Nvidia-designed mezzanine card:

The nature of the CPU is unknown as of this writing, since Day hadn’t yet pulled off the heatsink to reveal that part. It’s probably a Tegra X2 Parker SOC. Day has promised to update his post, and I’ll update this article if there are any surprises.

Nvidia is part of the Rethink Technology portfolio and is a recommended buy.

Tesla’s half empty glass

So here we have Tesla deciding that half of a full Drive PX 2 would be sufficient, and that with a fairly low-powered GPU. Now, Tesla’s problems delivering Enhanced Autopilot features come into focus. As of now, the company has still not delivered all the features necessary to bring Enhanced Autopilot to parity with the Mobileye (NYSE:MBLY) version.

And why did Tesla go to the trouble of reinventing the wheel by designing its own board? Just to save a few hundred dollars per unit in parts? I wonder whether the company asked Nvidia how much processing power would be needed to achieve Level 4 autonomy.

It’s a difficult question, to be sure. It depends in large part on the sensor suite and sensor types that are being used. However, Nvidia has been demonstrating its BB8 prototype for some time and showed it at CES 2017. BB8 appears to rely mainly on video cameras, as does Tesla. Nvidia probably had the best understanding available of the hardware processing requirements and what its system could deliver.

But it appears Tesla didn’t listen to Nvidia. This decision was apparently made by Jim Keller, who was hired away from Advanced Micro Devices (NASDAQ:AMD) in January 2016 to become Tesla’s VP of Autopilot Hardware Engineering.

This decision could very well come back to haunt the company at the worst possible time – the production ramp of the Model 3. It also speaks to the issue of Tesla’s avoidance of LIDAR for its autonomous vehicle system.

LIDAR has the advantage of being able to produce precise 3D maps of the positions and velocities of objects around the vehicle. This can also be derived from image processing of video camera data, but the processing is much more onerous. Tesla seems to have shortchanged itself in both sensor technology and processing.

The upshot is that the company probably can’t get to Level 4 autonomy (full self-driving) without a hardware upgrade. Fortunately for Tesla, Nvidia’s Drive PX 3 should be out by the end of the year, with full production in 2018. Drive PX 3, based on the Xavier SOC, will probably meet Tesla’s needs where the current system does not.

But retrofitting all the cars currently carrying around the cut-down Drive PX 2 hardware is an expense the company doesn’t need.

Daimler’s battery plant is not exactly a Gigafactory

Daimler AG (OTCPK:DDAIF) has announced plans to invest 500 million euros in a new battery factory for electric vehicles. The factory will be part of Daimler’s wholly owned ACCUMOTIVE subsidiary in Kamenz.

In the announcement, Daimler indicated it will have 10 purely electric vehicles in production, as well as numerous hybrid models, by 2022. Although the plant is being hailed as a European “gigafactory,” it is not expected to produce batteries on the scale of the Tesla Gigafactory in Nevada.

Tesla plans to have 35 gigawatt-hours of annual production by 2018, enough to support its goal of producing 500,000 Model 3 vehicles per year. The Daimler plant is expected to produce about 1 gigawatt-hour per year. That would be enough for about 17,000 60-kWh battery packs.
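The arithmetic behind that estimate, as a quick check (the 60 kWh pack size is the article’s illustrative figure, not a specific Daimler product):

```python
PLANT_OUTPUT_KWH = 1_000_000  # ~1 GWh/year, the reported Daimler capacity
PACK_SIZE_KWH = 60            # one mid-size EV battery pack

packs_per_year = PLANT_OUTPUT_KWH // PACK_SIZE_KWH  # ~16,700, i.e. "about 17,000"

# Same division for the Tesla Gigafactory target of 35 GWh/year
tesla_packs_per_year = 35_000_000 // PACK_SIZE_KWH  # roughly 583,000 packs
```

That 35:1 capacity ratio is why the “European gigafactory” label overstates the comparison.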

Disclosure: I am/we are long NVDA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Editor’s Note: This article discusses one or more securities that do not trade on a major U.S. exchange. Please be aware of the risks associated with these stocks.