Fatal Tesla Autopilot crash due to ‘over-reliance on automation, lack of safeguards’

The United States’ National Transportation Safety Board (NTSB) has released its final findings on the fatal crash involving a Tesla Model S operating in its semi-autonomous Autopilot mode.

The crash occurred in Florida in May 2016, when Joshua Brown’s Tesla Model S collided with the underside of a tractor-trailer as the truck turned left across the uncontrolled-access highway.

Tesla’s Autopilot system is a Level 2 semi-autonomous driving mode, designed to automatically steer and accelerate the car on a controlled-access motorway or freeway with well-defined entry and exit ramps.

According to the NTSB, Tesla’s Autopilot functioned as programmed because it was not designed to recognise a truck crossing into the car’s path from an intersecting road. As such, it did not warn the driver or engage the automated emergency braking system.

The report said the “driver’s pattern of use of the Autopilot system indicated an over-reliance on the automation and a lack of understanding of the system limitations”.

The NTSB’s team concluded that “while evidence revealed the Tesla driver was not attentive to the driving task, investigators could not determine from available evidence the reason for his inattention”.

It also noted that while “the truck driver had used marijuana before the crash”, his “level of impairment, if any, at the time of the crash could not be determined from the available evidence”.

Tesla did not escape blame, with the NTSB calling out the electric car maker for its ineffective methods of ensuring driver engagement.

In issuing the report, NTSB chairman Robert L. Sumwalt III said: “System safeguards that should have prevented the Tesla’s driver from using the car’s automation system on certain roadways were lacking, and the combined effects of human error and the lack of sufficient system safeguards resulted in a fatal collision that should not have happened”.

The electric car maker has since made changes to its Autopilot system, including reducing the interval before it begins warning the driver that their hands are off the steering wheel.

As part of its findings, the NTSB also issued a number of recommendations to various government authorities and car makers with level two self-driving features.

The NTSB called for standardised data logging formats, safeguards to ensure autonomous driving systems are used only in the manner for which they were designed, and improved monitoring of driver engagement in vehicles fitted with autonomous and semi-autonomous safety systems.

Joshua Brown’s family issued a statement through their lawyers earlier this week in anticipation of the NTSB’s report.

“We heard numerous times that the car killed our son. That is simply not the case,” the family said. “There was a small window of time when neither Joshua nor the Tesla features noticed the truck making the left-hand turn in front of the car.

“People die every day in car accidents. Change always comes with risks, and zero tolerance for deaths would totally stop innovation and improvements.”

Automation = Intelligence – Tesla Motors (NASDAQ:TSLA)

I was amused reading the recent back and forth between Facebook’s (NASDAQ:FB) Mark Zuckerberg and Tesla’s (NASDAQ:TSLA) Elon Musk. It is funny when two heavy hitters, both of whom arguably should have a good understanding of a given piece of technology, fall on such opposite ends of the spectrum about its capabilities. Essentially, it all started when someone asked Zuckerberg what he thought of Musk and his warnings about AI being an existential threat to humanity. He responded that such warnings were very irresponsible, to which Elon Musk replied on Twitter that Mark has limited knowledge of the subject. Is one of them wrong and the other right? Yes, I believe so, but more about that later. The exchange also got me thinking: is this belief the source of a lot of the claims Musk has made about the capabilities and development of Tesla’s Autopilot system? In this article, I will look into this a little more and show you some of the challenges in computer vision that illustrate the difficulties faced by a primarily vision-based fully autonomous driving system like Tesla’s Autopilot 2.0.

Six months in, still no noticeable FSD Features

It was just over six months ago that Elon Musk promised customers would start seeing FSD features in Tesla’s AP2 system within six months at the latest. However, customers haven’t even seen their “Enhanced” Autopilot match the performance of the previous version of Autopilot, let alone seen any unique FSD features.

[Image: Elon Musk tweet on FSD features]

Now, for anyone familiar with the technology, the fact that Tesla is facing a lot of challenges comes as no surprise. Unless Tesla makes significant strides beyond the current state of the art in computer vision, I do not believe there is any way it can truly have a competent fully autonomous driving system under its current Autopilot 2.0 configuration. However, this is exactly what Tesla has been “promising”, going so far as to hint that it would have details on a fully autonomous ride-sharing platform before the end of this year. By the way, that text hasn’t changed since last year, so I guess we should now expect these details by the end of 2018?

[Image: Tesla’s Full Self-Driving capability description. Source: Tesla]

Automation = Intelligence

For a long time now, I have said Elon Musk and Tesla have been making knowingly misleading statements about the potential future capabilities of their system. However, as I hear his views more and more, I am starting to question whether he really understands the technology and how it works. Before I go into this, let me take a step back and talk a bit about what has caused the recent revolution in the field of AI.

People have been using computers to automate a lot of work that humans traditionally had to do manually; indeed, computers now touch almost every aspect of human life. However, there were some problems that programmers found hard to write solutions for. One of the best examples is pattern recognition, for example in the form of computer vision.

I can easily spot my wife in a large crowd of people, but if someone asks me how I do it, I will find it hard to describe what features I rely on to identify her. This is also one of the reasons witnesses find it hard to describe a potential suspect without the help of a sketch artist. It is generally hard to codify how the human brain interprets patterns from such visual data. This is where the idea of deep learning with large Convolutional Neural Networks comes in handy. The basic idea is that rather than trying to identify what features the system should look for, you instead just feed the system lots of data and have it learn the features that will help it identify similar objects.
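To make the idea concrete, here is a minimal sketch in PyTorch (my choice of framework; the article names none) of a tiny convolutional network. Note that nothing in the code tells the system what features to look for: it gets labeled images and a loss to minimize, and the filters are learned.

```python
# Minimal sketch of feature learning with a small CNN. Nothing here encodes
# what a face, wheel, or sign looks like; the filters emerge from training.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(                # learned feature extractor
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch of 32x32 RGB images:
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()   # gradients tell each filter how to change
optimizer.step()  # repeated over a large dataset, the "features" emerge
```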

The only problem was that it took an excruciatingly long time to train a complex “deep” network on large training datasets. With the advent of GPU processing for training these systems, we can now use large amounts of data to train them in a reasonable amount of time. A job that would earlier take weeks to months on a large CPU cluster is now possible in a few hours to a few days, and this vastly expands the practical applications of these kinds of systems in pattern recognition. Over at FundamentalSpeculation, our price-action-based Momentum model is a result of harnessing this power of GPU processing in training.
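As a rough illustration of why GPUs changed the economics here, not a benchmark: training time is dominated by large matrix multiplications, which parallelize naturally on a GPU. The sketch below times the same workload on both devices; actual numbers will vary wildly with hardware.

```python
# Hedged sketch: time a matrix-multiplication-heavy workload on CPU vs. GPU.
import time
import torch

def time_matmuls(device, n=2048, reps=20):
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()      # wait for queued GPU work before timing
    start = time.perf_counter()
    for _ in range(reps):
        a = a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

print("CPU:", time_matmuls(torch.device("cpu")), "seconds")
if torch.cuda.is_available():
    print("GPU:", time_matmuls(torch.device("cuda")), "seconds")
```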

However, at the end of the day, it is important to understand that pattern recognition is all these systems are doing. It is not that they suddenly have a deeper understanding of what the objects in the picture represent. This is well illustrated by the work of Papernot et al., who in a recent paper used an adversarial system to trick a road-sign classifier hosted by MetaMind. That is an instance of an adversarial system built specifically to target the classifier, but consider edge cases where there is graffiti or a sticker or something similar on the sign, and you realize there can be more benign instances where this becomes a problem.

Source: Papernot, et al. The image on the left is an original image of a stop sign. The image on the right is a perturbed image that causes the classifier to mis-classify it as a yield sign even though it looks the same to the human eye.

To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. -Papernot, et al.
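For readers curious about the mechanics, the sketch below shows the simplest white-box version of the trick, the fast gradient sign method. To be clear, this is not Papernot et al.’s attack (they had no access to MetaMind’s gradients and trained a substitute model instead), and the linear model here is a stand-in, but the core move is the same: nudge every pixel slightly in the direction that increases the classifier’s loss.

```python
# White-box adversarial example via the fast gradient sign method (FGSM).
# Stand-in model for illustration; not Papernot et al.'s black-box attack.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "stop sign"
true_label = torch.tensor([0])

loss = loss_fn(model(image), true_label)
loss.backward()                                   # gradient of loss w.r.t. pixels

epsilon = 0.03                                    # imperceptibly small change
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# The two images look identical to a human, yet the predictions can differ:
print(model(image).argmax(1).item(), model(adversarial).argmax(1).item())
```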

The same concept also applies to achievements in the field of deep reinforcement learning. A lot of people, including me, celebrated when DeepMind’s AlphaGo beat the world’s best Go player earlier this year. However, even in this example, if the size of the board were any different from what the system was trained on, it would fail miserably.
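Part of that brittleness is baked into the architecture itself. The toy example below (emphatically not AlphaGo’s actual network) shows that a model with a fully connected layer sized for a 19x19 board cannot even accept a 13x13 input, let alone transfer what it learned to it.

```python
# Toy illustration of input-size brittleness; not AlphaGo's architecture.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Flatten(), nn.Linear(19 * 19, 361))  # wired for 19x19

print(net(torch.randn(1, 19, 19)).shape)   # works: torch.Size([1, 361])
try:
    net(torch.randn(1, 13, 13))            # a 13x13 board
except RuntimeError as e:
    print("Fails on a 13x13 board:", e)
```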

Humans have a tendency to see a machine perform a particular task and think of it as having a similar level of competency to a human capable of performing the same task. This generalization cannot be applied to the field of Machine Learning/AI. To be clear, I am not saying automation will not cause massive disruption to the economy. It has the potential to displace a lot of jobs. However, the idea that applications of “Narrow” AI that are becoming prevalent today are an existential risk and need to be regulated is silly. It is also the reason why “billions of autopilot miles” is a silly metric to gauge any potential advantage Tesla may have over its competition.

Driving a car is a very complex task. The reason the average human is reasonably good at it is that humans have “intelligence”, which gives them a high-level understanding of their environment and lets them handle most edge cases fairly easily. This is fundamentally not true of the AI systems we have today. When you think of these systems in terms of “automation” rather than “intelligence”, you start to realize some of the hard challenges that need to be overcome. To top it all off, Tesla’s attempt to achieve this using a primarily vision-based system, if anything, puts it at a significant disadvantage to the rest of the competition.

Challenges faced by a primarily vision-based Autopilot system

So what does this mean for Tesla’s Autopilot system? Firstly, a big part of my critique of its design is that the system has minimal redundancy: it is primarily a vision-based system. This critique is sometimes challenged by the claim that its multiple cameras offer redundancy. I don’t mean to pick on my fellow contributor ValueAnalyst, but this was the most recent written exchange I remember having on the topic, in an article I published about Audi’s (OTCPK:AUDVF) new Level 3 system.

[Image: ValueAnalyst’s comment]

He is not alone in making this assumption. There was also a recent paper arguing that adversarial attacks like the one I talked about earlier do not apply to autonomous vehicles, because the same object is observed at various angles and scales and this redundancy defeats instances of misclassification. What I find really funny is that the rebuttal to this argument came from Musk’s very own OpenAI.

We’ve created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like. – OpenAI Blog
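Roughly speaking, OpenAI’s trick is to optimize the perturbation against a whole distribution of views instead of a single image, an approach known as expectation over transformation. The sketch below is a heavily simplified illustration under assumptions of mine (the stand-in model, the transforms, the bounds); it is not OpenAI’s actual setup.

```python
# Simplified "expectation over transformation" sketch: optimize a perturbation
# that survives random rotations and rescalings, not just one fixed view.
import torch
import torch.nn as nn
import torchvision.transforms as T

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))  # stand-in
loss_fn = nn.CrossEntropyLoss()
random_view = T.Compose([
    T.RandomRotation(20),                       # vary the viewing angle
    T.RandomResizedCrop(64, scale=(0.7, 1.0)),  # vary the scale
])

image = torch.rand(1, 3, 64, 64)
target = torch.tensor([3])                 # class the attacker wants to force
delta = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(200):
    # Average the attack loss over several random views of the same image,
    # so the perturbation keeps working as angle and scale change.
    loss = sum(loss_fn(model(random_view(image + delta)), target)
               for _ in range(8)) / 8
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-0.05, 0.05)          # keep the change imperceptible
```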

Tesla’s challenge, though, is even bigger. What we have been talking about so far is classification of well-defined categories of objects. Tesla’s system, however, not only has to identify and classify these well-defined objects but must also reliably identify any other potential object that is an obstacle in its drive path, without false positives. This makes it a significantly harder problem. Again, I’m not saying these systems won’t improve. However, the current state-of-the-art systems are not reliable enough to do this, and there is no reason to believe Tesla has surpassed the state of the art in this field. This makes Tesla’s claim of achieving full autonomy in this short time frame all the more ridiculous.

Impact on valuations

None of this would matter that much if everyone understood this as a field of research with future potential and did not factor its impact into current valuations. However, this is not the case. Again, I don’t want to pick on any one person, but I’m drawn to Morgan Stanley’s Adam Jonas. He is currently neutral on the stock after it reached his price target, but let’s take a look at what got him to that price target. I remember watching one of his interviews on CNBC a few months back and having a hard time stopping myself from throwing things at the television to stop the insanity. What was really amazing about his performance was his confidence in talking about a topic he clearly does not understand. I highly recommend you watch his full interview.

We continue to believe over 100% of the upside from the current price to our $305 target can be accounted for by the value of Tesla Mobility, an on-demand and highly automated transportation service we anticipate to be launched at low volume in 2018. – Morgan Stanley

In a note he sent clients earlier that month, he broke down the company’s prospects by segment. He assigned zero value to Tesla Energy because of its negative margins, believing any prospects for this segment would be a rounding error in the grand scheme of things. He was significantly below the Street and management on Model 3 sales volume while being higher than most on its average selling price ($60,000). He spoke about some of his assumptions in an interview with Bloomberg at the time. He is also not very bullish on the sale of electric vehicles to individual customers. He gets to his valuation based on the assumption that Tesla will have a deployable autonomous driving system that can be used for ride-sharing within a Tesla Network.

“Well, we think the electric cars for private use really are … for human driving pleasure for wealthier individuals. That’s why it’s so important that in the shared model where you’re not driving 10,000 miles a year, but 50 or 100 in a fleet operation, then the economics of electrification you can get that pay back period under three years. That’s the game changer – shared.” – Adam Jonas, Morgan Stanley

So what happens if this possibility evaporates? How much real demand would there be for a $35,000 electric vehicle if the prospect of generating revenue by participating in an autonomous ride-sharing network goes away? Forget even that: how many people are really interested in a midsize luxury sedan if the convenience of autonomous driving is taken away? Consider the volume of all small/midsize luxury car sales in America last year, shown below:

[Chart: US small/midsize luxury car sales, 2016]

The entire small/midsize luxury segment sold a little over 800,000 vehicles across all manufacturers and models in the US in 2016. Without the prospect of autonomous driving, this will be the target market for the Model 3. Further, as other manufacturers start to bring their more advanced driver-assist and limited self-driving features to lower-priced models, Tesla, relying primarily on its vision-based system, will have a hard time keeping up with the functionality offered by its competitors.

Conclusion

For a long time now, I have believed Elon Musk has been misleading TSLA shareholders and customers about the potential and development curve of Tesla’s Autopilot system. While I still believe that, I am now starting to question whether he genuinely understands the capabilities and limitations of narrow AI systems. At the end of the day, neither option is good for Tesla shareholders. If and when the Model 3 starts to ship in large volumes, those future customers will have high expectations for the autonomous driving technology promised to them, and they may not be as forgiving as some of the early adopters have been so far.

Disclosure: I am/we are short TSLA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: The Content provided in this article should be used for informational and educational purposes only and is not intended to provide tax, legal, insurance, investment, or financial advice, and the content is not intended to be a substitute for professional advice. Always seek the advice of a relevant professional with any questions about any financial, legal or other decision you are seeking to make. Any views expressed by Laxman Vembar are his own and do not necessarily reflect the view, opinions and positions of FundamentalSpeculation.IO. Finally, you should not rely solely on the information provided by the models on FundamentalSpeculation.IO in making investment decisions, but you should consider this information in the context of all information available to you.

Google Assistant now has 70 home automation partners

Google CEO Sundar Pichai said he’s optimistic about the potential for Google Assistant and the voice-activated Google Home speaker it powers.

Pichai said the Assistant SDK released in April should bolster the number of devices developers can create that tap into the power of Google Assistant. He noted that there are now more than 70 home automation partners that let users control devices using Assistant on Google Home and phones — including Honeywell, Logitech, and LG.

And he said the company is investing in this area by beefing up the headcount and marketing dollars dedicated to these products.

“People are no longer only using a keyboard, mouse, and multi-touch, but are also using emerging inputs like voice and camera to ask questions and get things done in the real world,” he said during an earnings call with analysts on Monday. “We are seeing this in the way people interact with the Google Assistant, which is already now available on more than 100 million devices since launching last year, and there is more to come.”

Interestingly, Google had already cited the 100 million Android devices figure for Assistant back in May. The fact that the company didn’t update the figure suggests Home isn’t growing very quickly, and neither is Assistant on iPhone, which arrived in May.

The market for virtual assistants has become hot over the last couple of years thanks to the success of Amazon’s Echo and Dot devices powered by Alexa. In a report released in May, eMarketer said 35.6 million Americans use a voice-activated assistant device as of April, up 128 percent from the previous year.

Alphabet and Amazon also face competition from Microsoft’s Cortana and Apple’s Siri.

Google Assistant should get a further boost as Alphabet rolls out Google Home to more geographies. Besides the U.S., it’s now available in Canada, Australia, and the U.K. And in early August, it will go on sale in France and Germany.

“We are very focused over the long-term on making sure the Assistant can actually help people get things done in the real world,” Pichai said.

The End of Instagram Automation and the Rise of the Micro-Influencer

Have you ever received a generic comment on your Instagram post from someone you’re not friends with? What about a Twitter follow from a stranger who shares none of your interests? Peculiar, right? Well, according to a recent blog post written by a guilt-laden influencer, these engagements could be the work of a social media bot.

What is a bot, you ask? A bot is an automated tool used to grow one’s social community by commenting on and following other accounts automatically. As you can imagine, this way of gaming the system is frowned upon by many in the influencer marketing community, and it is something Instagram takes very seriously – so much so that it can actually get you banned from the platform.

These bots work through the connection that allows third-party software to communicate with the Instagram app – Instagram’s API – and using it this way is very much against the company’s Terms of Use (Basic Terms, numbers 10 and 15).
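Mechanically, such a bot is little more than a loop around an API client, which is why the comments it leaves feel so generic. The sketch below is purely illustrative: the client is a do-nothing stub I invented rather than Instagram’s real API, and automating engagement this way is exactly what the Terms of Use cited above prohibit.

```python
# Illustrative only. The client is a stub, not Instagram's real API; real
# engagement automation violates Instagram's Terms of Use and risks a ban.
import random
import time

class StubClient:
    """Stand-in for a social media API client."""
    def recent_posts(self, hashtag):
        return [f"post_{i}" for i in range(3)]
    def comment(self, post, text):
        print(f"comment on {post}: {text}")
    def follow_author(self, post):
        print(f"follow author of {post}")

client = StubClient()
canned = ["Love this!", "Great shot!", "Amazing feed!"]  # why bot comments feel generic

for post in client.recent_posts("travel"):
    client.comment(post, random.choice(canned))  # generic engagement
    client.follow_author(post)                   # follow, hoping for a follow-back
    time.sleep(1)                                # pacing to look human
```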

The Rise and the Fall of the Bot

You might be wondering what brought on all these bots in the first place. The answer is quite simple. Ever since Instagram added an algorithm that purported to show users relevant content based on “the likelihood you’ll be interested in the content” and “your relationship with the person posting and the timeliness of the post,” influencers who were once rocking the platform found themselves scrambling to get their posts noticed, liked, and commented upon.

In the past couple of months, Instagram has caught on to this artificial way of inflating numbers and has made it a point to crack down on these so-called bots, shutting down many of the most popular Instagram automation sites.

Leading with Authenticity

So are all influencers fake? Of course not – most truly are growing their audiences organically and have a large, loyal base of followers who enjoy watching and listening to their latest and greatest product picks and finds, DIY projects, recipes or travel adventures.

Many are what we consider micro-influencers, who tend to have a greater engagement rate than celebrity influencers. Overall, influencers who care about their followers, the brands they help promote and their own reputation are the ones who help those brands develop long-term relationships with their consumers, create fantastic engagement and move the needle forward.

The real truth is that influencer marketing is booming and is projected to be a $5-10 billion market by 2020. Brands are seeing results from influencer marketing campaigns and have been able to build more authentic relationships with their consumers because of it.

So instead of getting scared away, get smart. Here’s how you can steer clear of phony influencer numbers and run an influencer marketing campaign that truly impacts your brand.

4 Things To Look For When It Comes To Influencer Marketing:

  • Go Back to the Basics – Look for influencers who consistently post about things that really resonate with their followers and aren’t constantly using #ad in every post. Are they getting comments, likes, shares?
  • Look at influence, not numbers – Ask potential influencers, or influencer agencies, for metrics specifically around their user engagement and influencer reach.
  • Go Micro – Number-wise, a micro-influencer has between 1,000 and 50,000 followers on social media. Although they are still thousands (if not millions) of followers away from a celebrity influencer, they actually tend to drive stronger engagement on their posts.
  • Do your agency homework – Look for influencer agencies that don’t treat their influencers like a commodity. The truth is that influencer marketing is more art than science. Yes, certain pieces of software can help identify influencers who fit a certain demographic, but there is so much missing from the equation if software is the agency’s only go-to for your influencer marketing needs.

Facebook can’t solve its hate speech problem with automation

How, exactly, are people supposed to talk to each other online? For Facebook, this is as much an operational question as it is a philosophical one.

Last week, Facebook announced it has two billion users, which means roughly 27 percent of the world’s 7.5 billion people use the social network. In a post on Facebook’s “Hard Questions” blog, the company offered a look at the internal logic behind how it manages hate speech, the day before ProPublica broke a story about apparently hypocritical ways in which those standards are applied. Taken together, they make Facebook’s attempt to regulate speech look impossible.

Language is hard. AI trained on human language, for example, will replicate the biases of its users, just by seeing how words are used in relation to each other. And the same word, in the same sentence, can mean different things depending on the identity of the speaker, the identity of the person to whom it’s addressed, and even the manner of conversation. And that’s not even considering the multiple definitions of a given word.
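That bias-by-association effect is easy to reproduce with off-the-shelf word vectors. A small sketch (assuming gensim is installed; the pretrained GloVe vectors download on first use): the model’s entire notion of meaning comes from how words co-occur in human text, stereotypes included.

```python
# Word vectors learn meaning purely from co-occurrence in human-written text,
# so they absorb whatever associations, including biases, that text contains.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # pretrained GloVe vectors

# Words used in similar contexts end up close together...
print(vectors.most_similar("doctor", topn=3))

# ...and analogy arithmetic surfaces the associations the corpus contained,
# which is how occupational and gender stereotypes get baked in.
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```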

“What does the statement ‘burn flags not fags’ mean?,” writes Richard Allan, Facebook’s VP of Public Policy for Europe, the Middle East, and Africa. “While this is clearly a provocative statement on its face, should it be considered hate speech? For example, is it an attack on gay people, or an attempt to ‘reclaim’ the slur? Is it an incitement of political protest through flag burning? Or, if the speaker or audience is British, is it an effort to discourage people from smoking cigarettes (fag being a common British term for cigarette)? To know whether it’s a hate speech violation, more context is needed.”

Reached for comment, a Facebook spokesperson confirmed that the Hard Questions post wasn’t representative of any new policy. Instead, it’s simply transparency into the logic of how Facebook polices speech.

“People want certain things taken down, they want the right to say things,” says Kate Klonick, a resident fellow at the Information Society Project at Yale, “they want there to be a perfect filter that takes down the things that are hate speech or racist or sexist or hugely offensive.”

One reason Facebook may be parsing how it regulates speech in public is that, thanks to a trove of internal documents leaked to the Guardian, others are now reporting on Facebook’s internal guidance for what speech to take down and what to leave up.

“According to one document, migrants can be referred to as ‘filthy’ but not called ‘filth,’” reports ProPublica. “They cannot be likened to filth or disease ‘when the comparison is in the noun form,’ the document explains.”

Klonick studies how Facebook governs its users, and while the kinds of moderation discussed in the Hard Questions post aren’t new, the transparency is. Says Klonick, “It’s not secret anymore that this happens and that your voice is being moderated, your feed is being moderated behind the scenes.”

To Klonick’s eye, by starting to disclose more of what goes on in the sausage factory, Facebook is trying to preempt criticism of how, exactly, Facebook chooses to moderate speech.

There’s nothing, though, that says Facebook has to regulate all the speech it does, beyond what’s required by the law in the countries where Facebook operates. Several examples in the Hard Questions post hinge on context: Is the person reclaiming a former slur, or is it a joke among friends or an attack by a stranger against a member of a protected group? But what happens when war suddenly changes a term from casual use to something reported as hate speech?

One example from Hard Questions is how Facebook chose to handle the word “moskal,” Ukrainian slang for Russians, and “khokhol,” Russian slang for Ukrainians. When conflict between Russia and Ukraine broke out in 2014, people in both countries started reporting the terms used by the opposing side as hate speech. In response, says Allan, “We did an internal review and concluded that they were right. We began taking both terms down, a decision that was initially unpopular on both sides because it seemed restrictive, but in the context of the conflict felt important to us.”

One common use of reporting features on websites is for people to simply report others with whom they disagree, invoking the ability of the site to censor their ideological foes. With the conversion of regular language to slurs in the midst of a war, Facebook appears to have chosen to try and calm tensions itself, by removing posts with the offending words.

“I thought that example was really interesting because he says explicitly that the decision to censor those words was unpopular on both sides,” says Jillian York, the EFF’s Director for International Freedom of Expression. “That’s very much a value judgement. It’s not saying ‘people were killing themselves because of this term, and so we’re protecting ourselves from liability;’ which is one thing that they do, one that’s a little more understandable. This is Facebook saying, ‘the people didn’t want this, but we decided it was right for them anyway.'”

And while Facebook ultimately sets policy about what to take down and what to leave up, the work of moderation is done by people, and like with Facebook’s moderation of video, this work will continue to be done by people for the foreseeable future.

“People think that it’s easy to automate this, and I think that that blog post is why it’s so difficult right now, how far we are from automating it,” says Klonick. “Those are difficult human judgements to make; we’re years away from that. These types of examples that Richard Allan talked about in his blog post are exactly why we’re so far from automating this process.”

Again, Facebook is deciding the rules and standards of speech for over a quarter of the world’s population, something few governments in history have ever come close to. (Ancient Persia is a rare exception.) Given the enormity of the task, it’s worth looking at not just how Facebook chooses to regulate speech, but why it chooses to do so.

“On scale, moderating content for 2 billion people is impossible,” says York, “so why choose to be restrictive beyond the law? Why is Facebook trying to be the world’s regulator?”