Voices in AI – Episode 13: A Conversation with Bryan Catanzaro

In this episode, Byron and Bryan talk about sentience, transfer learning, speech recognition, autonomous vehicles, and economic growth.





Byron Reese: This is “Voices in AI” brought to you by Gigaom. I’m Byron Reese. Today, our guest is Bryan Catanzaro. He is the head of Applied AI Research at NVIDIA. He has a BS in computer science and Russian from BYU, an MS in electrical engineering from BYU, and a PhD in both electrical engineering and computer science from UC Berkeley. Welcome to the show, Bryan.

Bryan Catanzaro: Thanks. It’s great to be here.

Let’s start off with my favorite opening question. What is artificial intelligence?

It’s such a great question. I like to think about artificial intelligence as making tools that can perform intellectual work. Hopefully, those are useful tools that can help people be more productive in the things that they need to do. There are a lot of different ways of thinking about artificial intelligence, and maybe the way that I’m talking about it is a little bit narrower, but I think it’s also more connected with why artificial intelligence is changing so many companies and so many things about the way we do things in the world economy today: it actually is a practical thing that helps people be more productive in their work. We’ve been able to create industrialized societies with a lot of mechanization that helps people do physical work. Artificial intelligence is making tools that help people do intellectual work.

I ask you what artificial intelligence is, and you said it’s doing intellectual work. That’s sort of using the word to define it, isn’t it? What is that? What is intelligence?

Yeah, wow…I’m not a philosopher, so I actually don’t have like a…

Let me try a different tack. Is it artificial in the sense that it isn’t really intelligent and it’s just pretending to be, or is it really smart? Is it actually intelligent and we just call it artificial because we built it?

I really liked this idea from Yuval Harari that I read a while back, where he said there’s a difference between intelligence and sentience: intelligence is more about the capacity to do things, and sentience is more about being self-aware and being able to reason in the way that human beings reason. My belief is that we’re building increasingly intelligent systems that can perform what I would call intellectual work – understanding data, understanding the world around us that we can measure with sensors like video cameras or audio, or that we can write down in text, or record in some form. The process of interpreting that data and making decisions about what it means, that’s intellectual work, and that’s something that we can create machines to be more and more intelligent at. As for the definitions of artificial intelligence that move more towards consciousness and sentience, I think we’re a lot farther away from that as a community. There are definitely people that are super excited about making generally intelligent machines, but I think that’s farther away, and I don’t know how to define what general intelligence is well enough to start working on that problem myself. My work focuses mostly on practical things – helping computers understand data and make decisions about it.

Fair enough. I’ll only ask you one more question along those lines. I guess even down in narrow AI, though, if I had a sprinkler that comes on when my grass gets dry, it’s responding to its environment. Is that an AI?

I’d say it’s a very small form of AI. You could have a very smart sprinkler that was better than any person at figuring out when the grass needed to be watered. It could take into account all sorts of sensor data. It could take into account historical information. It might actually be more intelligent at figuring out how to irrigate than a human would be. And that’s a very narrow form of intelligence, but it’s a useful one. So yeah, I do think that could be considered a form of intelligence. Now it’s not philosophizing about the nature of irrigation and its harm on the planet or the history of human interventions on the world, or anything like that. So it’s very narrow, but it’s useful, and it is intelligent in its own way.

Fair enough. I do want to talk about AGI in a little while; I have some questions around that, and we’ll come to it in just a moment. Just in the narrow AI world, just in your world of using data and computers to solve problems, if somebody said, “Bryan, what is the state of the art? Where are we in AI? Is this the beginning and we ‘ain’t seen nothing yet’? Or are we really doing a lot of cool things, and we are well underway to mastering that world?”

I think we’re just at the beginning. We’ve seen so much progress over the past few years. It’s been really quite astonishing, the kind of progress we’ve seen in many different domains. It all started out with image recognition and speech recognition, but it’s gone a long way from there. A lot of the products that we interact with on a daily basis over the internet are using AI, and they are providing value to us. They provide our social media feeds, they provide recommendations and maps, they provide conversational interfaces like Siri or Android Assistant. All of those things are powered by AI and they are definitely providing value, but we’re still just at the beginning. There are so many things we don’t know yet how to do and so many underexplored problems to look at. So I believe we’ll continue to see applications of AI come up in new places for quite a while to come.

If I took a little statuette of a falcon, let’s say it’s a foot tall, and I showed it to you, and then I showed you some photographs and said, “Spot the falcon.” And half the time it’s sticking halfway behind a tree, half the time it’s underwater; one time it’s got peanut butter smeared on it. A person can do that really well, but computers are far away from that. Is that an example of us being really good at transfer learning? We’re used to knowing what things with peanut butter on them look like. What is it that people are doing that computers are having a hard time doing there?

I believe that people have evolved, over a very long period of time, to operate on planet Earth with the sensors that we have. So we have a lot of built-in knowledge that tells us how to process the sensors that we have and model the world. A lot of it is instinctual, and some of it is learned. I have young children, around a year old or so. They spend an awful lot of time just repetitively probing the world to see how it’s going to react when they do things, like pushing on a string, or a ball, and they do it over and over again because I think they’re trying to build up their models about the world. We actually have very sophisticated models of the world that maybe we take for granted sometimes, because everyone seems to get them so easily. It’s not something that you have to learn in school. But these models are actually quite useful, and they’re more sophisticated than – and more general than – the models that we currently can build with today’s AI technology.

To your question about transfer learning, I feel like we’re really good at transfer learning within the domain of things that our eyes can see on planet Earth. There are probably a lot of situations where an AI would be better at transfer learning. Might actually have fewer assumptions baked in about how the world is structured, how objects look, what kind of composition of objects is actually permissible. I guess I’m just trying to say we shouldn’t forget that we come with a lot of context. That’s instinctual, and we use that, and it’s very sophisticated.

Do you take from that that we ought to learn how to embody an AI and just let it wander around the world, bumping into things and poking at them and all of that? Is that what you’re saying? How do we overcome that?

That’s an interesting question. I’m not personally working on trying to build artificial general intelligence, but it will be interesting for those people that are working on it to see what kind of childhood is necessary for an AI. I do think that childhood plays a really important part in developing human intelligence, because it helps us build and calibrate these models of how the world works, which we then apply to all sorts of things, like your question about the falcon statue. Will computers need things like that? It’s possible. We’ll have to see. I think one of the things that’s different about computers is that they’re a lot better at transmitting information identically, so it may be the kind of thing that we can train once, and then just use repeatedly – as opposed to people, where the process of replicating a person is time-consuming and not exact.

But that transfer learning problem isn’t really an AGI problem at all, though. Right? We’ve taught a computer to recognize a cat, by giving it a gazillion images of a cat. But if we want to teach it how to recognize a bird, we have to start over, don’t we?

I don’t think we generally start over. I think most of the time, if people wanted to create a new classifier, they would use transfer learning from an existing classifier that had been trained on a wide variety of different object types. It’s actually not very hard to do that, and people do it successfully all the time. So at least for image recognition, I think transfer learning works pretty well. Other kinds of domains can be a little bit more challenging. But at least for image recognition, we’ve been able to find a set of higher-level features that are very useful in discriminating between all sorts of different kinds of objects, even objects that we haven’t seen before.
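The recipe Bryan describes – reuse the higher-level features of an already-trained model and fit only a small new classifier on top – can be sketched in a few lines. This is a toy illustration, not anything from the interview: the “frozen backbone” below is just a fixed random ReLU projection standing in for a pretrained network, and the data and task are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: its weights are FROZEN,
# i.e. never updated while we train on the new task.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    """Frozen 'backbone': raw input -> reusable higher-level features."""
    return np.maximum(x @ W_frozen, 0.0)  # ReLU features

# New task: a made-up binary classification problem ("bird" vs "not bird").
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(float)

# Transfer learning = train ONLY this small linear head on the features.
feats = extract_features(X)
w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(500):  # plain logistic-regression gradient descent on the head
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = p - y
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

acc = ((feats @ w + b > 0) == (y == 1)).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Because only the 17 head parameters are trained, far less data is needed than for training the whole network from scratch, which is why in practice people rarely “start over.”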

What about audio? Because I’m talking to you now, and I’m snapping my fingers. You don’t have any trouble continuing to hear me, but a computer trips over that. What do you think is going on in people’s minds? Why are we good at that, do you think? To get back to your point about living on Earth, it’s one of those Earth things we do. But as a general rule, how do we teach that to a computer? Is teaching it to hear something the same as teaching it to see something?

I think it’s similar. The best speech recognition accuracies come from systems that have been trained on huge amounts of data, and there does seem to be a relationship that the more data we can train a model on, the better the accuracy gets. We haven’t seen the end of that yet. I’m pretty excited about the prospect of teaching computers to understand audio better and better. However, I wanted to point out that for humans, this is kind of our superpower: conversation and communication. You watch birds flying in a flock, and the birds can all change direction instantaneously, and the whole flock just moves, and you’re like, “How do you do that and not run into each other?” They have a lot of built-in machinery that allows them to flock together. Humans have a lot of built-in machinery for conversation and for understanding spoken language. The pathways for speaking and the pathways for hearing evolved together, so they’re really well-matched.

With computers trying to understand audio, we haven’t gotten to that point yet. I remember, in some of the experiments that I’ve done in the past with speech recognition, that recognition performance was very sensitive to compression artifacts that were not actually audible to humans. We could take a recording, like this one, and recompress it in a way that sounded identical to a person, and observe a measurable difference in the recognition accuracy of our model. That was a little disconcerting, because we’re trying to train the model to be invariant to all the things that humans are invariant to, but it’s actually quite hard to do that. We certainly haven’t achieved that yet. Often, our models are still what we would call “overfitting”, where they’re paying attention to a lot of details that help them perform the tasks that we’re asking them to perform, but that aren’t actually helpful to solving the fundamental task. We’re continually trying to improve our understanding of the tasks that we’re solving so that we can avoid this, but we’ve still got more work to do.
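The effect Bryan describes – a model whose decision changes under a perturbation a human could never perceive – is easy to reproduce in a toy setting. This is a minimal sketch with made-up numbers and a plain linear “model,” not any real speech system: in high dimensions, a per-sample change far smaller than the signal can flip the decision when it is aligned with the model’s weights rather than with anything a listener would notice.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 1000
w = rng.normal(size=dim)
w /= np.linalg.norm(w)               # unit-norm "model weights"

x = rng.normal(size=dim)             # stand-in for one audio clip
score = float(x @ w)                 # the model's decision is sign(score)

# Smallest per-sample nudge (plus a 10% margin) that flips the decision,
# spread across all dimensions in the direction of the weights.
eps = 1.1 * abs(score) / np.abs(w).sum()
delta = -np.sign(score) * eps * np.sign(w)
flipped_score = float((x + delta) @ w)

print(f"per-sample perturbation: {eps:.4f} (signal std is ~1.0)")
print(f"score before: {score:+.3f}, after: {flipped_score:+.3f}")
```

Each sample moves by only a few percent of the signal’s standard deviation – analogous to an inaudible compression artifact – yet the thousands of tiny nudges add up along the weight direction and the classification flips.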

My standard question, when I’m put in front of a chatbot or one of the devices that sit on everybody’s desktops – I can’t say their names out loud because they’ll start talking to me right now – is: “What is bigger, a nickel or the sun?” To date, nothing has ever been able to answer that question. It doesn’t know how “sun” is spelled. “Whose son? The sun? Nickel? That’s actually a coin.” All of that. What do we have to get good at for the computer to answer that question? Run me down the litany of the things we can’t do, or that we’re not doing well yet, because no system I’ve ever tried has answered that correctly.

I think one of the things is that we’re typically not building chat systems to answer trivia questions just like that. I think if we were building a special-purpose trivia system for questions like that, we probably could answer it. IBM Watson did pretty well on Jeopardy, because it was trained to answer questions like that. I think we definitely have the databases, the knowledge bases, to answer questions like that. The problem is that kind of a question is really outside of the domain of most of the personal assistants that are being built as products today because honestly, trivia bots are fun, but they’re not as useful as a thing that can set a timer, or check the weather, or play a song. So those are mostly the things that those systems are focused on.

Fair enough, but I would differ. You can go to Wolfram Alpha and say, “What’s bigger, the Statue of Liberty or the Empire State Building?” and it’ll answer that. And you can ask Amazon’s product that same question, and it’ll answer it. Is that because those are legit questions and my question is not legit, or is it because we haven’t taught systems to disambiguate very well, and so they don’t really know what I mean when I say “sun”?

I think that’s probably the issue. There’s a language modeling problem when you say, “What’s bigger, a nickel or the sun?” The sun can mean so many different things, like you were saying. Nickel, actually, can be spelled a couple of different ways and has a couple of different meanings. Dealing with ambiguities like that is a little bit hard. I think when you ask that question of me, I categorize it as a trivia question, and so I’m able to disambiguate all of those things, look up the answer in my little knowledge base in my head, and answer your question. But I actually don’t think that particular question is impossible to solve. It just hasn’t been a focus to try to solve stuff like that, and that’s why systems aren’t good at it.

AIs have done a really good job playing games: Deep Blue, Watson, AlphaGo, and all of that. I guess those are constrained environments with a fixed set of rules, and it’s easy to understand who wins, and what a point is, and all that. What is going to be the next watershed event? Now they can outbluff people in poker. What’s something that’s going to happen, in a year, or two years, or five years down the road, where one day it wasn’t like that in the universe, and the next day it was? One day, the best Go player in the world was suddenly a machine.

The thing that’s on my mind for that right now is autonomous vehicles. I think it’s going to change the world forever to unchain people from the driver’s seat. It’s going to give people hugely increased mobility. I have relatives that their doctors have asked them to stop driving cars because it’s no longer safe for them to be doing that, and it restricts their ability to get around the world, and that frustrates them. It’s going to change the way that we all live. It’s going to change the real estate markets, because we won’t have to park our cars in the same places that we’re going to. It’s going to change some things about the economy, because there’s going to be new delivery mechanisms that will become economically viable. I think intelligence that can help robots essentially drive around the roads, that’s the next thing that I’m most excited about, that I think is really going to change everything.

We’ll come to that in just a minute, but I’m actually asking something else. We have self-driving cars, and on an evolutionary basis, they’ll get a little better and a little better. You’ll see them more and more, and then someday there’ll be even more of them, and then there’ll be this and this and this. It’s not that surprise moment, though, of AlphaGo just beating Lee Sedol at Go. I’m wondering if there is something else like that – some binary milestone that we can all keep our eyes open for?

I don’t know. We have self-driving cars already in some sense, but I don’t have a self-driving car that would let me, say, sit in it at nighttime, go to sleep, and wake up to find it had brought me to Disneyland. I would like that kind of self-driving car, but that car doesn’t exist yet. I think self-driving trucks that can go cross-country carrying stuff, that’s going to radically change the way that we distribute things. As you said, we’re on the evolutionary path to self-driving cars, but there are going to be some discrete moments when people actually start using them to do new things, and those will feel pretty significant.

As far as games and stuff, and computers being better at games than people, it’s funny, because I feel like Silicon Valley sometimes has a very linear idea of intelligence – that one person is smarter than another person, maybe because of an SAT score, or an IQ test, or something. Because of that linear view of intelligence, some people feel threatened by artificial intelligence: they extrapolate that artificial intelligence is getting smarter and smarter along this linear scale, and that that’s going to lead to all sorts of surprising things, like Lee Sedol losing at Go, but on a much bigger scale for all of us. I feel kind of the opposite. Intelligence is such a multidimensional thing. The fact that a computer is better at Go than I am doesn’t really change my life very much, because I’m not very good at Go. I don’t play Go. I don’t consider Go to be an important part of my intelligence. Same with chess. When Garry Kasparov lost to Deep Blue, that didn’t threaten my intelligence. I define the way that I work, how I add value to the world, and what things make me happy on a lot of other axes besides “Can I play chess?” or “Can I play Go?” I think that speaks to the idea that intelligence really is very multifaceted. There are a lot of different kinds – probably thousands or millions of different kinds – of intelligence, and it’s not very linearizable.

Because of that, I feel like, as we watch artificial intelligence develop, we’re going to see increasingly intelligent machines, but they’re going to be more intelligent in some very narrow domains, like “this is a better Go player than me”, or “this is a better car driver than me”. That’s going to be incredibly useful, but it’s not going to change the way that I think about myself, or about my work, or about what makes me happy, because I feel like there are so many more dimensions of intelligence that are going to remain the province of humans. It’s going to take a very long time, if ever, for artificial intelligence to become better than us at all of them, because, as I said, I don’t believe that intelligence is a linearizable thing.

And you said you weren’t a philosopher. I guess the thing that’s interesting to people is that there was a time when information couldn’t travel faster than a horse. Then the train came along, and information could travel faster. That’s why in the old Westerns, if they ever made it onto the train, that was it – they were out of reach; nothing traveled faster than the train. Then we had the telegraph and, all of a sudden, that was this amazing thing: information could travel at the speed of light. And then one time they ran these cables under the ocean, and somebody in England could talk to somebody in the United States instantly. Each one of those moments, I think, is an opportunity to pause, and reflect, and mark a milestone, and think about what it all means. I think that’s why it matters that a computer just beat these awesome poker players. It learned to bluff. You just kind of want to think about it.

So let’s talk about jobs for a moment, because you’ve been talking around that for just a second. Just to set the question up: generally speaking, there are three views of what automation and artificial intelligence are going to do to jobs. One of them reflects kind of what you were saying: that there’s a certain group of workers who are considered low-skilled, that automation is going to take those low-skilled jobs, and that a sizable part of the population is going to be locked out of the labor market – kind of like a permanent Great Depression, over and over and over, forever. Then there’s another view that says, “No, you don’t understand. There’s going to be an inflection point where they can do every single thing. They’re going to be a better conductor and a better painter and a better novelist and a better everything than us. Don’t think that you’ve got something that a machine can’t do.” Clearly, that isn’t your viewpoint, from what you said. Then there’s a third viewpoint that says, “No, in the past, even when we had these transformative technologies like electricity and mechanization, people took those technologies and used them to increase their own productivity and, therefore, their own incomes. And you never had unemployment go up because of them, because people just took them and made new jobs with them.” Of those three – or maybe a fourth one I didn’t cover – where do you find yourself?

I feel like I’m closer in spirit to number three. I’m optimistic. I believe that the primary way that we should expect economic growth in the future is by increased productivity. If you buy a house or buy some stock and you want to sell it 20 or 30 years from now, who’s going to buy it, and with what money, and why do you expect the price to go up? I think the answer to that question should be that the people in the future will have more money than us because they’re more productive, and that’s why we should expect our world economy to continue growing: because we find more productivity. I feel like this is actually necessary. World productivity growth has been slowing for the past several decades, and I feel like artificial intelligence is our way out of this trap, where we have been unable to figure out how to grow our economy because our productivity hasn’t been improving. Figuring out how to improve productivity is necessary for all of us, and I think AI is the way that we’re going to do that for the next several decades.

The one thing that I disagreed with in your third statement was this idea that unemployment would never go up. I think nothing is ever that simple. I actually am quite concerned about job displacement in the short-term. I think there will be people that suffer and in fact, I think, to a certain extent, this is already happening. The election of Donald Trump was an eye-opener to me that there really exists a lot of people that feel that they have been left behind by the economy, and they come to very different conclusions about the world than I might. I think that it’s possible that, as we continue to digitize our society, and AI becomes a lever that some people will become very good at using to increase their productivity, that we’re going to see increased inequality and that worries me.

The primary challenges that I’m worried about, for our society, with the rise of AI, have to do more with making sure that we give people purpose and meaning in their life that maybe doesn’t necessarily revolve around punching out a timecard, and showing up to work at 8 o’clock in the morning every day. I want to believe that that future exists. There are a lot of people right now that are brilliant people that have a lot that they could be contributing in many different ways – intellectually, artistically – that are currently not given that opportunity, because they maybe grew up in a place that didn’t have the right opportunities for them to get the right education so that they could apply their skills in that way, and many of them are doing jobs that I think don’t allow them to use their full potential.

So I’m hoping that, as we automate many of those jobs, that more people will be able to find work that provides meaning and purpose to them and allows them to actually use their talents and make the world a better place, but I acknowledge that it’s not going to be an easy transition. I do think that there’s going to be a lot of implications for how our government works and how our economy works, and I hope that we can figure out a way to help defray some of the pain that will happen during this transition.

You talked about two things. You mentioned income inequality, but then you also said, “I think we’re going to have unemployment from these technologies.” Separating those, and just looking at unemployment for a minute: you say things are never that simple. But with the exception of the Great Depression, which nobody believes was caused by technology, unemployment has been between 5% and 10% in this country for 250 years, and it only moves between 5% and 10% because of the business cycle – there aren’t counterexamples. Just imagine if your job was working animals that performed physical labor. They pulled, and pushed, and all of that. And then somebody made the steam engine. That was disruptive. But even then, with the electrification of industry and the adoption of steam power – we went from 5% to 85% of our power being generated by steam in just 22 years – even with that kind of disruption, you still didn’t have any increase in unemployment. I’m curious: what is the mechanism, in your mind, by which this time is different?

I think that’s a good point that you raise, and I actually haven’t studied all of those other transitions that our society has gone through. I’d like to believe that it’s not different. That would be a great story if we could all come to agreement, that we won’t see increased unemployment from AI. I think the reason why I’m a little bit worried is that I think this transition in some fields will happen quickly, maybe more quickly than some of the transitions in the past did. Just because, as I was saying, AI is easier to replicate than some other technologies, like electrification of a country. It takes a lot of time to build out physical infrastructure that can actually deliver that. Whereas I think for a lot of AI applications, that infrastructure will be cheaper and quicker to build, so the velocity of the change might be faster and that could lead to a little bit more shock. But it’s an interesting point you raise, and I certainly hope that we can find a way through this transition that is less painful than I’m worried it could be.

Do you worry about misuse of AI? I’m an optimist on all of this. I know that every time some new technology comes along, people are always looking at the bad cases. Take something like the internet: the internet has overwhelmingly been a force for good. It connects people in a profound way. There are a million things. And yeah, some people abuse it. But on net, I believe almost all technology is used for good, because on net, people, on average, are more inclined to build than to destroy. That being said, do you worry about nefarious uses of AI, specifically in warfare?

Yeah. I think that there definitely are going to be some scary killer robots that armies make. Armies love to build machinery that kills things, and AI will help them do that, and that will be scary. It’s interesting to ask where the real threat is going to come from. Sometimes, I feel like the threat of malevolent AI being deployed against people is going to be more subtle than that. It’s going to be more about things that you can do after compromising the cyber systems of some adversary, and things that you can do to manipulate them using AI. There’s been a lot of discussion about Russian involvement in the 2016 election in the US, and that wasn’t about sending evil killer robots. It was more about changing people’s opinions, or attempting to change their opinions, and AI will give entities tools to do that on a scale that maybe we haven’t seen before. I think there may be nefarious uses of AI that are more subtle and harder to see than a full-frontal assault from a movie with evil killer robots. I do worry about all of those things, but I also share your optimism. We humans make lots of mistakes, and we shouldn’t give ourselves too easy a time here. We should learn from those mistakes, but we also do a lot of things well. We have used technologies in the past to make the world better, and I hope AI will do so as well.

Pedro Domingos wrote a book called The Master Algorithm, in which he says there are all of these different tools and techniques that we use in artificial intelligence, and he surmises that there is probably a grandparent algorithm – the master algorithm – that can solve any problem, any range of problems. Does that seem possible or likely to you, or do you have any thoughts on that?

I think it’s a little bit far away, at least from AI as it’s practiced today. Right now, the practical, on-the-ground experience of researchers trying to use AI to do something new is filled with a lot of pain, suffering, blood, sweat, tears, and perseverance if they are to succeed, and I see that in my lab every day. I have brilliant researchers in my lab who are working very hard and doing amazing work, and most of the things they try fail. They have to keep trying. I think that’s generally the case right now across all the people that are working on AI. The thing that’s different is that we’ve actually started to see some big successes, along with all of those more frustrating everyday occurrences. So I do think that we’re making progress, but a push-button master algorithm that can solve any problem you pose to it – that’s something that’s hard for me to conceive of with today’s state of artificial intelligence.

Of course, it’s doubtful we’ll have another AI winter because, like you said, AI is kind of delivering the goods, and there have been three things that happened to make that possible. One of them is better hardware, and obviously you’re part of that world. The second thing is better algorithms; we’ve learned to do things a lot smarter. And the third thing is that we have more data, because we are able to collect it, and store it, and whatnot. Setting hardware aside, which would you say has been the bigger advance: that we have so much more data, or so much better algorithms?

I think the most important thing is more data. I think the algorithms that we’re using in AI right now are, more or less, clever variations of algorithms that have been around for decades and used to not work. When I was a PhD student studying AI, all the smart people told me, “Don’t work on deep learning, because it doesn’t work. Use this other algorithm, called support vector machines.” At the time, the hope was that that was going to be the master algorithm. So I stayed away from deep learning back then because, at the time, it didn’t work. I think now we have so much more data, and deep learning models have been so successful at taking advantage of that data, that we’ve been able to make a lot of progress. I wouldn’t characterize deep learning as a master algorithm, though, because deep learning is like a fuzzy cloud of things that have some relationship to each other, and actually finding a spot inside that fuzzy cloud to solve a particular problem requires a lot of human ingenuity.

Is there a phrase – it’s such a jargon-loaded industry now – are there any of the words that you just find rub you the wrong way? Because they don’t mean anything and people use them as if they do? Do you have anything like that?

Everybody has pet peeves. I would say that my biggest pet peeve right now is the word neuromorphic. I have almost an allergic reaction every time I hear that word, mostly because I don’t think we know what neurons are or what they do, and I think modeling neurons in a way that actually could lead to brain simulations that actually worked is a very long project that we’re decades away from solving. I could be wrong on that. I’m always waiting for somebody to prove me wrong. Strong opinions, weakly held. But so far, neuromorphic is a word that I just have an allergic reaction to, every time.

Tell me about what you do. You are the head of Applied AI Research at NVIDIA, so what does your day look like? What does your team work on? What’s your biggest challenge right now, and all of that?

NVIDIA sells GPUs which have powered most of the deep learning revolution, so pretty much all of the work that’s going on with deep learning across the entire world right now, runs on NVIDIA GPUs. And that’s been very exciting for NVIDIA, and exciting for me to be involved in building that. The next step, I think, for NVIDIA is to figure out how to use AI to change the way that it does its own work. NVIDIA is incentivized to do this because we see the value that AI is bringing to our customers. Our GPU sales have been going up quite a bit because we’re providing a lot of value to everyone else who’s trying to use AI for their own problems. So the next step is to figure out how to use AI for NVIDIA’s problems directly. Andrew Ng, who I used to work with, has this great quote that “AI is the new electricity,” and I believe that. I think that we’re going to see AI applied in many different ways to many different kinds of problems, and my job at NVIDIA is to figure out how to do that here. So that’s what my team focuses on.

We have projects going on in quite a few different domains, ranging from graphics to audio, and text, and others. We’re trying to change the way that everything at NVIDIA happens: from chip design, to video games, and everything in between. As far as my day-to-day work goes, I lead this team, so that means I spend a lot of time talking with people on the team about the work that they’re doing, and trying to make sure they have the right resources, data, the right hardware, the right ideas, the right connections, so that they can make progress on problems that they’re trying to solve. Then when we have prototypes that we’ve built showing how to apply AI to a particular problem, then I work with people around the company to show them the promise of AI applied to problems that they care about.

I think one of the things that’s really exciting to me about this mission is that we’re trying to change NVIDIA’s work at the core of the company. Rather than applying AI to some peripheral part of the company, where it might merely be nice to have, we’re actually trying to solve very fundamental problems that the company faces, and hopefully we’ll be able to change the way the company does business and transform NVIDIA into an AI company, not just a company that makes hardware for AI.

You are the head of the Applied AI Research. Is there a Pure AI Research group, as well?

Yes, there is.

So everything you do, you have an internal customer for already?

That’s the idea. To me, the difference between fundamental research and applied research is more a question of emphasis: what’s the fundamental goal of your work? If the goal is academic novelty, that would be fundamental research. Our goal is applied: we think about applications all the time, and we don’t work on problems unless we have a clear application that we’re trying to build that could use a solution.

In most cases, do other groups come to you and say, “We have this problem we really want to solve. Can you help us?” Or is the science nascent enough that you go and say, “Did you know that we can actually solve this problem for you?”

It kind of works all of those ways. We have a list of projects that people around the company have proposed to us, and we also have a list of projects that we ourselves think are interesting to look at. There’s also a few projects that my management tells me, “I really want you to look at this problem. I think it’s really important.” We get input from all directions, and then prioritize, and go after the ones we think are most feasible, and most important.

And do you find a talent shortage? You’re NVIDIA on the one hand, but on the other hand, you know: it’s AI.

I think the entire field, no matter what company you work at, has a shortage of qualified scientists that can do AI research, and that’s despite the fact that the number of people jumping into AI is increasing every year. If you go to any of the academic AI conferences, you’ll see how much energy and excitement there is, and how many people are there who didn’t used to be. That’s really wonderful to see. But even with all of that growth and change, it is a big problem for the industry. So, to all of your listeners that are trying to figure out what to do next: come work on AI. We have lots of fun problems to work on, and not nearly enough people doing it.

I know a lot of your projects I’m sure you can’t talk about, but tell me something you have done, that you can talk about, and what the goal was, and what you were able to achieve. Give us a success story.

I’ll give you one that’s relevant to the last question that you asked, which is about how to find talent for AI. We’ve actually built a system that can match candidates to job openings at NVIDIA. Basically, it can predict how well we think a particular candidate is a fit for a particular job. That system is actually performing pretty well. So we’re trialing it with hiring managers around the company to figure out if it can help them be more efficient in their work as they search for people to come join NVIDIA.
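As a toy illustration of the kind of candidate-to-job matching described here (everything below is invented for illustration; it is not NVIDIA’s actual system), one could score each resume by keyword overlap with a job description and rank the pool so a hiring manager only reviews the top few:

```python
# Toy sketch of resume-to-job matching (hypothetical; not the actual
# system): score each resume by keyword overlap with the job
# description, then rank candidates and return a short list.

def score(resume_words, job_words):
    # Jaccard-style overlap between the two keyword sets.
    resume, job = set(resume_words), set(job_words)
    return len(resume & job) / len(resume | job)

def rank_candidates(resumes, job_words, top_k=2):
    # resumes is a list of (name, keywords) pairs.
    ranked = sorted(resumes, key=lambda r: score(r[1], job_words), reverse=True)
    return [name for name, _ in ranked[:top_k]]

job = ["cuda", "gpu", "deep", "learning", "c++"]
resumes = [
    ("alice", ["cuda", "gpu", "deep", "learning", "python"]),
    ("bob",   ["sales", "marketing"]),
    ("carol", ["c++", "gpu", "graphics"]),
]
shortlist = rank_candidates(resumes, job)  # ["alice", "carol"]
```

A real system would learn the scoring function from data on past hires rather than use a hand-written overlap measure, but the ranking structure is the same.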

That looks like a game, doesn’t it? I assume you have a pool of resumes or LinkedIn profiles or whatever, a pool of successful employees, and a pool of job descriptions, and you’re trying to say, “How can I pull from that big pool, based on these job descriptions, and actually pick the people that did well in the end?”

That’s right.

That’s like a game, right? You have points.

That’s right.

Would you ever productize anything, or is everything that you’re doing just for your own use?

We focus primarily on building prototypes, not products, in my team. I think that’s what the research is about. Once we build a prototype that shows promise for a particular problem, then we work with other people in the company to get that actually deployed, and they would be the people that think about business strategy about whether something should be productized, or not.

But you, in theory, might turn “NVIDIA Resume Pro” into something people could use?

Possibly. NVIDIA also works with a lot of other companies. As we enable companies in many different parts of the economy to apply AI to their problems, we work with them to help them do that. So it might make more sense for us, for example, to deliver this prototype to some of our partners that are in a position to deliver products like this more directly, and then they can figure out how to enlarge its capabilities and make it more general, to try to solve bigger problems that address their whole market and not just one company’s needs. Partnering with other companies is good for NVIDIA because it helps us grow AI, which is something we want to do because, as AI grows, we grow. Personally, for some of the things that we’re working on, I think it just doesn’t make sense, and it’s not really in NVIDIA’s DNA, to productize them directly, because it’s just not the business model that the company has.

I’m sure you’re familiar with the “right to know” legislation in Europe: the idea that if an AI makes a decision about you, you have a right to know why it made that decision. AI researchers are like, “It’s not necessarily that easy to do that.” So in your case, your AI would actually be subject to that. It would say, “Why did you pick that person over this person for that job?” Is that an answerable question?

First of all, I don’t think that this system – or I can’t imagine – using it to actually make hiring decisions. I think that would be irresponsible. This system makes mistakes. What we’re trying to do is improve productivity. If instead of having to sort through 200 resumes to find 3 that I want to talk to—if I can look at 10 instead—then that’s a pretty good improvement in my productivity, but I’m still going to be involved, as a hiring manager, to figure out who is the right fit for my jobs.

But an AI excluded 190 people from that position.

It didn’t exclude them. It sorted them, and then the person decided how to allocate their time in a search.

Let’s look at the problem more abstractly. What do you think, just in general, about the idea that every decision an AI makes, should be, and can be, explained?

I think it’s a little bit utopian. Certainly, I don’t have the ability to explain all of the decisions that I make, and people, generally, are not very good at explaining their decisions, which is why there are significant legal battles going on about factual things that people see in different ways and remember in different ways. So asking a person to explain their intent is actually a very complicated thing, and we’re not very good at it. For the same reason, I don’t think that we’re going to be able to enforce that AI explains all of its decisions in a way that makes sense to humans. I do think that there are things we can do to make the results of these systems more interpretable. For example, on the resume and job description matching system that I mentioned earlier, we’ve built a prototype that can highlight the parts of the resume that were most interesting to the model, in both a positive and a negative sense. That’s a baby step towards interpretability: if you were to pull up that job description and a particular person, you could see how they matched, and that might explain what the model was paying attention to as it made a ranking.
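A minimal sketch of that kind of highlighting (the model and weights below are invented for illustration, not the actual prototype): score a resume with a toy linear keyword model, then estimate each word’s influence by removing it and re-scoring, a leave-one-out attribution.

```python
# Hypothetical sketch of per-token interpretability: a toy linear
# keyword model scores a resume, and each word's influence is the
# score drop when that word is removed (leave-one-out attribution).

WEIGHTS = {"cuda": 2.0, "python": 1.0, "sales": -1.0}  # invented weights

def score(words):
    return sum(WEIGHTS.get(w, 0.0) for w in words)

def token_influence(words):
    base = score(words)
    # Positive influence: the word raised the score; negative: it hurt it.
    return {w: base - score([x for x in words if x != w]) for w in set(words)}

resume = ["experienced", "cuda", "and", "python", "developer"]
influence = token_influence(resume)
# influence["cuda"] == 2.0, influence["python"] == 1.0, others 0.0
```

For a deep model the attribution method would be different (gradients or attention rather than word removal), but the output is the same shape: a per-token weight you can render as a highlight over the resume.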

It’s funny because when you hear reasons why people exclude a resume, I remember one person said, “I’m not going to hire him. He has the same first name as somebody else on the team. That’d just be too confusing.” And somebody else I remember said that the applicant was a vegan and the place they like to order pizza from didn’t have a vegan alternative that the team liked to order from. Those are anecdotal of course, but people use all kinds of other things when they’re thinking about it.

Yeah. That’s actually one of the reasons why I’m excited about this particular system is that I feel like we should be able to construct it in a way that actually has fewer biases than people do, because we know that people harbor all sorts of biases. We have employment laws that guide us to stay away from making decisions based on protected classes. I don’t know if veganism is a protected class, but it’s verging on that. If you’re making hiring decisions based on people’s personal lifestyle choices, that’s suspect. You could get in trouble for that. Our models, we should be able to train them to be more dispassionate than any human could be.

We’re running out of time. Let’s close up by: do you consume science fiction? Do you ever watch movies or read books or any of that? And if so, is there any of it that you look at, especially any that portrays artificial intelligence, like Ex Machina, or Her, or Westworld or any of that stuff, that you look at and you’re like, “Wow, that’s really interesting,” or “That could happen,” or “That’s fascinating,” or anything like that?

I do consume science fiction. I love science fiction. I don’t actually feel like current science fiction matches my understanding of AI very well. Ex Machina, for example, that was a fun movie. I enjoyed watching that movie, but I felt, from a scientific point of view, it just wasn’t very interesting. I was talking about our built-in models of the world. One of the things that humans, over thousands of years, have drilled into our heads is that there’s somebody out to get you. We have a large part of our brain that’s worrying all the time, like, “Who’s going to come kill me tonight? Who’s going to take away my job? Who’s going to take my food? Who’s going to burn down my house?” There’s all these things that we worry about. So a lot of the depictions of AI in science fiction inflame that part of the brain that is worrying about the future, rather than actually speak to the technology and its potential.

I think probably the part of science fiction that has had the most impact on my thoughts about AI is Isaac Asimov’s Three Laws. Those, I think, are pretty classic, and I hope that some of them can be adapted to the kinds of problems that we’re trying to solve with AI, to make AI safe, and to make it possible for people to feel confident that they’re interacting with AI and not worry about it. But I feel like most of science fiction, especially movies (maybe books can be a little bit more intellectual and a little bit more interesting), sells better by making people afraid than by showing people a mundane existence where AI is helping them live better lives. That’s just not nearly as compelling a movie, so I don’t actually feel like the popular culture treatment of AI is very realistic.

All right. Well, on that note, I say, we wrap up. I want to thank you for a great hour. We covered a lot of ground, and I appreciate you traveling all that way with me.

It was fun.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster.

Byron Reese: This is “Voices in AI” brought to you by Gigaom. I’m Byron Reese. Today, our guest is Bryan Catanzaro. He is the head of Applied AI Research at NVIDIA. He has a BS in computer science and Russian from BYU, an MS in electrical engineering from BYU, and a PhD in both electrical engineering and computer science from UC Berkeley. Welcome to the show, Bryan.

Bryan Catanzaro: Thanks. It’s great to be here.

Let’s start off with my favorite opening question. What is artificial intelligence?

It’s such a great question. I like to think about artificial intelligence as making tools that can perform intellectual work. Hopefully, those are useful tools that can help people be more productive in the things that they need to do. There’s a lot of different ways of thinking about artificial intelligence, and maybe the way that I’m talking about it is a little bit more narrow, but I think it’s also a little bit more connected with why artificial intelligence is changing so many companies and so many things about the way that we do things in the world economy today is because it actually is a practical thing that helps people be more productive in their work. We’ve been able to create industrialized societies with a lot of mechanization that help people do physical work. Artificial intelligence is making tools that help people do intellectual work.

I ask you what artificial intelligence is, and you said it’s doing intellectual work. That’s sort of using the word to define it, isn’t it? What is that? What is intelligence?

Yeah, wow…I’m not a philosopher, so I actually don’t have like a…

Let me try a different tact. Is it artificial in the sense that it isn’t really intelligent and it’s just pretending to be, or is it really smart? Is it actually intelligent and we just call it artificial because we built it?

I really liked this idea from Yuval Harari that I read a while back where he said there’s the difference between intelligence and sentience, where intelligence is more about the capacity to do things and sentience is more about being self-aware and being able to reason in the way that human beings reason. My belief is that we’re building increasingly intelligent systems that can perform what I would call intellectual work. Things about understanding data, understanding the world around us that we can measure with sensors like video cameras or audio or that we can write down in text, or record in some form. The process of interpreting that data and making decisions about what it means, that’s intellectual work, and that’s something that we can create machines to be more and more intelligent at. I think the definitions of artificial intelligence that move more towards consciousness and sentience, I think we’re a lot farther away from that as a community. There are definitely people that are super excited about making generally intelligent machines, but I think that’s farther away and I don’t know how to define what general intelligence is well enough to start working on that problem myself. My work focuses mostly on practical things—helping computers understand data and make decisions about it.

Fair enough. I’ll only ask you one more question along those lines. I guess even down in narrow AI, though, if I had a sprinkler that comes on when my grass gets dry, it’s responding to its environment. Is that an AI?

I’d say it’s a very small form of AI. You could have a very smart sprinkler that was better than any person at figuring out when the grass needed to be watered. It could take into account all sorts of sensor data. It could take into account historical information. It might actually be more intelligent at figuring out how to irrigate than a human would be. And that’s a very narrow form of intelligence, but it’s a useful one. So yeah, I do think that could be considered a form of intelligence. Now it’s not philosophizing about the nature of irrigation and its harm on the planet or the history of human interventions on the world, or anything like that. So it’s very narrow, but it’s useful, and it is intelligent in its own way.

Fair enough. I do want to talk about AGI in a little while. I have some questions around…We’ll come to that in just a moment. Just in the narrow AI world, just in your world of using data and computers to solve problems, if somebody said, “Bryan, what is the state-of-the-art? Where are we at in AI? Is this the beginning and you ‘ain’t seen nothing yet’? Or are we really doing a lot of cool things, and we are well underway to mastering that world?”

I think we’re just at the beginning. We’ve seen so much progress over the past few years. It’s been really quite astonishing, the kind of progress we’ve seen in many different domains. It all started out with image recognition and speech recognition, but it’s gone a long way from there. A lot of the products that we interact with on a daily basis over the internet are using AI, and they are providing value to us. They provide our social media feeds, they provide recommendations and maps, they provide conversational interfaces like Siri or Android Assistant. All of those things are powered by AI and they are definitely providing value, but we’re still just at the beginning. There are so many things we don’t know yet how to do and so many underexplored problems to look at. So I believe we’ll continue to see applications of AI come up in new places for quite a while to come.

If I took a little statuette of a falcon, let’s say it’s a foot tall, and I showed it to you, and then I showed you some photographs, and said, “Spot the falcon.” And half the time it’s sticking halfway behind a tree, half the time it’s underwater; one time it’s got peanut butter smeared on it. A person can do that really well, but computers are far away from that. Is that an example of us being really good at transfer learning? We’re used to knowing what things with peanut butter on them look like? What is it that people are doing that computers are having a hard time to do there?

I believe that people have evolved, over a very long period of time, to operate on planet Earth with the sensors that we have. So we have a lot of built-in knowledge that tells us how to process the sensors that we have and models the world. A lot of it is instinctual, and some of it is learned. I have young children, like a year-old or so. They spend an awful lot of time just repetitively probing the world to see how it’s going to react when they do things, like pushing on a string, or a ball, and they do it over and over again because I think they’re trying to build up their models about the world. We have actually very sophisticated models of the world that maybe we take for granted sometimes because everyone seems to get them so easily. It’s not something that you have to learn in school. But these models are actually quite useful, and they’re more sophisticated than – and more general than – the models that we currently can build with today’s AI technology.

To your question about transfer learning, I feel like we’re really good at transfer learning within the domain of things that our eyes can see on planet Earth. There are probably a lot of situations where an AI would be better at transfer learning. Might actually have fewer assumptions baked in about how the world is structured, how objects look, what kind of composition of objects is actually permissible. I guess I’m just trying to say we shouldn’t forget that we come with a lot of context. That’s instinctual, and we use that, and it’s very sophisticated.

Do you take from that that we ought to learn how to embody an AI and just let it wander around the world, bumping into things and poking at them and all of that? Is that what you’re saying? How do we overcome that?

It’s an interesting question you note. I’m not personally working on trying to build artificial general intelligence, but it will be interesting for those people that are working on it to see what kind of childhood is necessary for an AI. I do think that childhood is a really important part of developing human intelligence, and plays a really important part of developing human intelligence because it helps us build and calibrate these models of how the world works, which then we apply to all sorts of things like your question of the falcon statue. Will computers need things like that? It’s possible. We’ll have to see. I think one of the things that’s different about computers is that they’re a lot better at transmitting information identically, so it may be the kind of thing that we can train once, and then just use repeatedly – as opposed to people, where the process of replicating a person is time-consuming and not exact.

But that transfer learning problem isn’t really an AGI problem at all, though. Right? We’ve taught a computer to recognize a cat, by giving it a gazillion images of a cat. But if we want to teach it how to recognize a bird, we have to start over, don’t we?

I don’t think we generally start over. I think most of the time if people wanted to create a new classifier, they would use transfer learning from an existing classifier that had been trained on a wide variety of different object types. It’s actually not very hard to do that, and people do that successfully all the time. So at least for image recognition, I think transfer learning works pretty well. For other kinds of domains, they can be a little bit more challenging. But at least for image recognition, we’ve been able to find a set of higher-level features that are very useful in discriminating between all sorts of different kinds of objects, even objects that we haven’t seen before.

What about audio? Because I’m talking to you now and I’m snapping my fingers. You don’t have any trouble continuing to hear me, but a computer trips over that. What do you think is going on in people’s minds? Why are we good at that, do you think? To get back to your point about we live on Earth, it’s one of those Earth things we do. But as a general rule, how do we teach that to a computer? Is that the same as teaching it to see something, as to teach it to hear something?

I think it’s similar. The best speech recognition accuracies come from systems that have been trained on huge amounts of data, and there does seem to be a relationship that the more data we can train a model on, the better the accuracy gets. We haven’t seen the end of that yet. I’m pretty excited about the prospects of being able to teach computers to continually understand audio, better and better. However, I wanted to point out, humans, this is kind of our superpower: conversation and communication. You watch birds flying in a flock, and the birds can all change direction instantaneously, and the whole flock just moves, and you’re like, “How do you do that and not run into each other?” They have a lot of built-in machinery that allows them to flock together. Humans have a lot of built-in machinery for conversation and for understanding spoken language. The pathways for speaking and the pathways for hearing evolve together, so they’re really well-matched.

With computers trying to understand audio, we haven’t gotten to that point yet. I remember some of the experiments that I’ve done in the past with speech recognition, that the recognition performance was very sensitive to compression artifacts that were actually not audible to humans. We could actually take a recording, like this one, and recompress it in a way that sounded identical to a person, and observe a measurable difference in the recognition accuracy of our model. That was a little disconcerting because we’re trying to train the model to be invariant to all the things that humans are invariant to, but it’s actually quite hard to do that. We certainly haven’t achieved that yet. Often, our models are still what we would call “overfitting”, where they’re paying attention to a lot of details that help it perform the tasks that we’re asking it to perform, but they’re not actually helpful to solving the fundamental tasks that we’re trying to perform. And we’re continually trying to improve our understanding of the tasks that we’re solving so that we can avoid this, but we’ve still got more work to do.

My standard question when I’m put in front of a chatbot or one of the devices that sits on everybody’s desktop, I can’t say them out loud because they’ll start talking to me right now, but the question I always ask is “What is bigger, a nickel or the sun?” To date, nothing has ever been able to answer that question. It doesn’t know how sun is spelled. “Whose son? The sun? Nickel? That’s actually a coin.” All of that. What all do we have to get good at, for the computer to answer that question? Run me down the litany of all the things we can’t do, or that we’re not doing well yet, because there’s no system I’ve ever tried that answered that correctly.

I think one of the things is that we’re typically not building chat systems to answer trivia questions just like that. I think if we were building a special-purpose trivia system for questions like that, we probably could answer it. IBM Watson did pretty well on Jeopardy, because it was trained to answer questions like that. I think we definitely have the databases, the knowledge bases, to answer questions like that. The problem is that kind of a question is really outside of the domain of most of the personal assistants that are being built as products today because honestly, trivia bots are fun, but they’re not as useful as a thing that can set a timer, or check the weather, or play a song. So those are mostly the things that those systems are focused on.

Fair enough, but I would differ. You can go to Wolfram Alpha and say, “What’s bigger, the Statue of Liberty or the Empire State Building?” and it’ll answer that. And you can ask Amazon’s product that same question, and it’ll answer it. Is that because those are legit questions and my question is not legit, or is it because we haven’t taught systems to disintermediate very well and so they don’t really know what I mean when I say “sun”?

I think that’s probably the issue. There’s a language modeling problem when you say, “What’s bigger, a nickel or the sun?” The sun can mean so many different things, like you were saying. Nickel, actually, can be spelled a couple of different ways and has a couple of different meanings. Dealing with ambiguities like that is a little bit hard. I think when you ask that question to me, I categorize this as a trivia question, and so I’m able to disambiguate all of those things, and look up the answer in my little knowledge base in my head, and answer your question. But I actually don’t think that particular question is impossible to solve. I just think it’s just not been a focus to try to solve stuff like that, and that’s why they’re not good.

AIs have done a really good job playing games: Deep Blue, Watson, AlphaGo, and all of that. I guess those are constrained environments with a fixed set of rules, and it’s easy to understand who wins, and what a point is, and all that. What is going to be the next thing, that’s a watershed event, that happens? Now they can outbluff people in poker. What’s something that’s going to be, in a year, or two years, five years down the road, that one day, it wasn’t like that in the universe, and the next day it was? And the next day, the best Go player in the world was a machine.

The thing that’s on my mind for that right now is autonomous vehicles. I think it’s going to change the world forever to unchain people from the driver’s seat. It’s going to give people hugely increased mobility. I have relatives that their doctors have asked them to stop driving cars because it’s no longer safe for them to be doing that, and it restricts their ability to get around the world, and that frustrates them. It’s going to change the way that we all live. It’s going to change the real estate markets, because we won’t have to park our cars in the same places that we’re going to. It’s going to change some things about the economy, because there’s going to be new delivery mechanisms that will become economically viable. I think intelligence that can help robots essentially drive around the roads, that’s the next thing that I’m most excited about, that I think is really going to change everything.

We’ll come to that in just a minute, but I’m actually asking…We have self-driving cars, and on an evolutionary basis, they’ll get a little better and a little better. You’ll see them more and more, and then someday there’ll be even more of them, and then they’ll be this and this and this. It’s not that surprise moment, though, of AlphaGo just beat Lee Sedol at Go. I’m wondering if there is something else like that—that it’s this binary milestone that we can all keep our eye open for?

I don’t know. As far as we have self-driving cars already, I don’t have a self-driving car that could say, for example, let me sit in it at nighttime, go to sleep and wake up, and it brought me to Disneyland. I would like that kind of self-driving car, but that car doesn’t exist yet. I think self-driving trucks that can go cross country carrying stuff, that’s going to radically change the way that we distribute things. I do think that we have, as you said, we’re on the evolutionary path to self-driving cars, but there’s going to be some discrete moments when people actually start using them to do new things that will feel pretty significant.

As far as games and stuff, and computers being better at games than people, it’s funny because I feel like Silicon Valley has, sometimes, a very linear idea of intelligence. That one person is smarter than another person maybe because of an SAT score, or an IQ test, or something. They use that sort of linearity of an intelligence to where some people feel threatened by artificial intelligence because they extrapolate that artificial intelligence is getting smarter and smarter along this linear scale, and that’s going to lead to all sorts of surprising things, like Lee Sedol losing to Go, but on a much bigger scale for all of us. I feel kind of the opposite. Intelligence is such a multidimensional thing. The fact that a computer is better at Go then I am doesn’t really change my life very much, because I’m not very good at Go. I don’t play Go. I don’t consider Go to be an important part of my intelligence. Same with chess. When Gary Kasparov lost to Deep Blue, that didn’t threaten my intelligence. I am sort of defining the way that I work and how I add value to the world, and what things make me happy on a lot of other axes besides “Can I play chess?” or “Can I play Go?” I think that speaks to the idea that intelligence really is very multifaceted. There’s a lot of different kinds – there’s probably thousands or millions of different kinds of intelligence – and it’s not very linearizable.

Because of that, I feel like, as we watch artificial intelligence develop, we’re going to see increasingly intelligent machines, but they’re going to be increasingly intelligent in some very narrow domains, like “this robot plays Go better than I do,” or “this car drives better than I do.” That’s going to be incredibly useful, but it’s not going to change the way that I think about myself, or about my work, or about what makes me happy, because I feel like there are so many more dimensions of intelligence that are going to remain the province of humans. It’s going to take a very long time, if ever, for artificial intelligence to become better than us at all of them, because, as I said, I don’t believe that intelligence is a linearizable thing.

And you said you weren’t a philosopher. I guess the thing that’s interesting to people is: there was a time when information couldn’t travel faster than a horse. Then the train came along, and information could travel faster. That’s why, in the old Westerns, if they ever made it onto the train, that was it, and they were out of range. Nothing traveled faster than the train. Then we had the telegraph and, all of a sudden, that was this amazing thing, that information could travel at the speed of light. And then one day they ran these cables under the ocean, and somebody in England could talk to somebody in the United States instantly. Each one of those moments, I think, is an opportunity to pause, and reflect, and mark a milestone, and think about what it all means. I think that’s why it matters that a computer just beat these awesome poker players. It learned to bluff. You just kind of want to think about that.

So let’s talk about jobs for a moment, because you’ve been talking around that for a second. Just to set the question up: generally speaking, there are three views of what automation and artificial intelligence are going to do to jobs. One of them reflects kind of what you were saying: that there’s going to be a certain group of workers who are considered low-skilled, that automation is going to take those low-skilled jobs, and that a sizable part of the population will be locked out of the labor market, kind of like a permanent Great Depression, over and over and over, forever. Then there’s another view that says, “No, you don’t understand. There’s going to be an inflection point where they can do every single thing. They’re going to be a better conductor and a better painter and a better novelist and a better everything than us. Don’t think that you’ve got something that a machine can’t do.” Clearly, that isn’t your viewpoint, from what you said. Then there’s a third viewpoint that says, “No, in the past, even when we had transformative technologies like electricity and mechanization, people took those technologies and used them to increase their own productivity and, therefore, their own incomes. And you never had unemployment go up because of them, because people just took them and made new jobs with them.” Of those three, or maybe a fourth one I didn’t cover, where do you find yourself?

I feel like I’m closer in spirit to number three. I’m optimistic. I believe that the primary way that we should expect economic growth in the future is through increased productivity. If you buy a house or buy some stock and you want to sell it 20 or 30 years from now, who’s going to buy it, with what money, and why do you expect the price to go up? I think the answer should be that people in the future will have more money than us because they’re more productive, and that’s why we should expect our world economy to continue growing: because we find more productivity. I actually feel like this is necessary. World productivity growth has been slowing for the past several decades, and I feel like artificial intelligence is our way out of this trap, where we have been unable to figure out how to grow our economy because our productivity hasn’t been improving. Figuring out how to improve productivity is a necessary thing for all of us, and I think AI is the way that we’re going to do that for the next several decades.

The one thing that I disagreed with in your third statement was this idea that unemployment would never go up. I think nothing is ever that simple. I actually am quite concerned about job displacement in the short-term. I think there will be people that suffer and in fact, I think, to a certain extent, this is already happening. The election of Donald Trump was an eye-opener to me that there really exists a lot of people that feel that they have been left behind by the economy, and they come to very different conclusions about the world than I might. I think that it’s possible that, as we continue to digitize our society, and AI becomes a lever that some people will become very good at using to increase their productivity, that we’re going to see increased inequality and that worries me.

The primary challenges that I’m worried about, for our society, with the rise of AI, have to do more with making sure that we give people purpose and meaning in their life that maybe doesn’t necessarily revolve around punching out a timecard, and showing up to work at 8 o’clock in the morning every day. I want to believe that that future exists. There are a lot of people right now that are brilliant people that have a lot that they could be contributing in many different ways – intellectually, artistically – that are currently not given that opportunity, because they maybe grew up in a place that didn’t have the right opportunities for them to get the right education so that they could apply their skills in that way, and many of them are doing jobs that I think don’t allow them to use their full potential.

So I’m hoping that, as we automate many of those jobs, that more people will be able to find work that provides meaning and purpose to them and allows them to actually use their talents and make the world a better place, but I acknowledge that it’s not going to be an easy transition. I do think that there’s going to be a lot of implications for how our government works and how our economy works, and I hope that we can figure out a way to help defray some of the pain that will happen during this transition.

You talked about two things. You mentioned income inequality as a thing, but then you also said, “I think we’re going to have unemployment from these technologies.” Separating those for a minute and just looking at unemployment: you say things are never that simple, but with the exception of the Great Depression, which nobody believes was caused by technology, unemployment in this country has been between 5% and 10% for 250 years, and it only moves within that range because of the business cycle. There aren’t counterexamples. Just imagine if your job was keeping animals that performed physical labor. They pulled, and pushed, and all of that. And somebody made the steam engine. That was disruptive. And then we had the electrification of industry. We adopted steam power; we went from 5% to 85% of our power being generated by steam in just 22 years. Even with that kind of disruption, you still didn’t have any increase in unemployment. I’m curious, what is the mechanism, in your mind, by which this time is different?

I think that’s a good point that you raise, and I actually haven’t studied all of those other transitions that our society has gone through. I’d like to believe that it’s not different. That would be a great story if we could all come to agreement, that we won’t see increased unemployment from AI. I think the reason why I’m a little bit worried is that I think this transition in some fields will happen quickly, maybe more quickly than some of the transitions in the past did. Just because, as I was saying, AI is easier to replicate than some other technologies, like electrification of a country. It takes a lot of time to build out physical infrastructure that can actually deliver that. Whereas I think for a lot of AI applications, that infrastructure will be cheaper and quicker to build, so the velocity of the change might be faster and that could lead to a little bit more shock. But it’s an interesting point you raise, and I certainly hope that we can find a way through this transition that is less painful than I’m worried it could be.

Do you worry about misuse of AI? I’m an optimist on all of this. And I know that every time we have some new technology come along, people are always looking at the bad cases. You take something like the internet, and the internet has overwhelmingly been a force for good. It connects people in a profound way. There’s a million things. And yeah, some people abuse it. But on net, all technology, I believe, almost all technology on net is used for good because I think, on net, people, on average, are more inclined to build than to destroy. That being said, do you worry about nefarious uses of AI, specifically in warfare?

Yeah. I think that there definitely are going to be some scary killer robots that armies make. Armies love to build machinery that kills things and AI will help them do that, and that will be scary. I think it’s interesting, like, where is the real threat going to come from? Sometimes, I feel like the threat of malevolent AI being deployed against people is going to be more subtle than that. It’s going to be more about things that you can do after compromising the cyber systems of some adversary, and things that you can do to manipulate them using AI. There’s been a lot of discussion about Russian involvement in the 2016 election in the US, and that wasn’t about sending evil killer robots. It was more about changing people’s opinions, or attempting to change their opinions, and AI will give entities tools to do that on a scale that maybe we haven’t seen before. I think there may be nefarious uses of AI that are more subtle and harder to see than a full-frontal assault from a movie with evil killer robots. I do worry about all of those things, but I also share your optimism. I think we humans, we make lots of mistakes and we shouldn’t give ourselves too easy of a time here. We should learn from those mistakes, but we also do a lot of things well. And we have used technologies in the past to make the world better, and I hope AI will do so as well.

Pedro Domingos wrote a book called The Master Algorithm, where he says there are all of these different tools and techniques that we use in artificial intelligence, and he surmises that there is probably a grandparent algorithm, the master algorithm, that can solve any problem, or any range of problems. Does that seem possible or likely to you, or do you have any thoughts on that?

I think it’s a little bit far away, at least from AI as it’s practiced today. Right now, the practical, on-the-ground experience of researchers trying to use AI to do something new is filled with a lot of pain, suffering, blood, sweat, tears, and perseverance if they are to succeed, and I see that in my lab every day. I have brilliant researchers in my lab who are working very hard, and they’re doing amazing work. And most of the things they try fail. And they have to keep trying. I think that’s generally the case right now across all the people that are working on AI. The thing that’s different is we’ve actually started to see some big successes, along with all of those more frustrating everyday occurrences. So I do think that we’re making progress, but a push-button master algorithm that can solve any problem you pose to it is something that’s hard for me to conceive of with today’s state of artificial intelligence.

Of course, it’s doubtful we’ll have another AI winter because, like you said, AI is delivering the goods, and there have been three things that happened to make that possible. One of them is better hardware, and obviously you’re part of that world. The second thing is better algorithms; we’ve learned to do things a lot smarter. And the third thing is that we have more data, because we are able to collect it, and store it, and whatnot. Assuming hardware has been the biggest of the driving factors, which of the other two would you say has been the bigger advance? Is it that we have so much more data, or so much better algorithms?

I think the most important thing is more data. I think the algorithms that we’re using in AI right now are, more or less, clever variations of algorithms that have been around for decades, and used to not work. When I was a PhD student studying AI, all the smart people told me, “Don’t work on deep learning, because it doesn’t work. Use this other algorithm called support vector machines.” At the time, the hope was that that was going to be the master algorithm, so I stayed away from deep learning back then because, at the time, it didn’t work. I think now we have so much more data, and deep learning models have been so successful at taking advantage of that data, that we’ve been able to make a lot of progress. I wouldn’t characterize deep learning as a master algorithm, though, because deep learning is like a fuzzy cloud of things that have some relationships to each other, but actually finding a space inside that fuzzy cloud to solve a particular problem requires a lot of human ingenuity.

Is there a phrase – it’s such a jargon-loaded industry now – are there any of the words that you just find rub you the wrong way? Because they don’t mean anything and people use them as if they do? Do you have anything like that?

Everybody has pet peeves. I would say that my biggest pet peeve right now is the word neuromorphic. I have almost an allergic reaction every time I hear that word, mostly because I don’t think we know what neurons are or what they do, and I think modeling neurons in a way that actually could lead to brain simulations that actually worked is a very long project that we’re decades away from solving. I could be wrong on that. I’m always waiting for somebody to prove me wrong. Strong opinions, weakly held. But so far, neuromorphic is a word that I just have an allergic reaction to, every time.

Tell me about what you do. You are the head of Applied AI Research at NVIDIA, so what does your day look like? What does your team work on? What’s your biggest challenge right now, and all of that?

NVIDIA sells GPUs which have powered most of the deep learning revolution, so pretty much all of the work that’s going on with deep learning across the entire world right now, runs on NVIDIA GPUs. And that’s been very exciting for NVIDIA, and exciting for me to be involved in building that. The next step, I think, for NVIDIA is to figure out how to use AI to change the way that it does its own work. NVIDIA is incentivized to do this because we see the value that AI is bringing to our customers. Our GPU sales have been going up quite a bit because we’re providing a lot of value to everyone else who’s trying to use AI for their own problems. So the next step is to figure out how to use AI for NVIDIA’s problems directly. Andrew Ng, who I used to work with, has this great quote that “AI is the new electricity,” and I believe that. I think that we’re going to see AI applied in many different ways to many different kinds of problems, and my job at NVIDIA is to figure out how to do that here. So that’s what my team focuses on.

We have projects going on in quite a few different domains, ranging from graphics to audio, and text, and others. We’re trying to change the way that everything at NVIDIA happens: from chip design, to video games, and everything in between. As far as my day-to-day work goes, I lead this team, so that means I spend a lot of time talking with people on the team about the work that they’re doing, and trying to make sure they have the right resources, data, the right hardware, the right ideas, the right connections, so that they can make progress on problems that they’re trying to solve. Then when we have prototypes that we’ve built showing how to apply AI to a particular problem, then I work with people around the company to show them the promise of AI applied to problems that they care about.

I think one of the things that’s really exciting to me about this mission is that we’re really trying to change NVIDIA’s work at the core of the company. So rather than working on applied AI, that could maybe help some peripheral part of the company that maybe could be nice if we did that, we’re actually trying to solve very fundamental problems that the company faces with AI, and hopefully we’ll be able to change the way that the company does business, and transform NVIDIA into an AI company, and not just a company that makes hardware for AI.

You are the head of the Applied AI Research. Is there a Pure AI Research group, as well?

Yes, there is.

So everything you do, you have an internal customer for already?

That’s the idea. To me, the difference between fundamental research and applied research is more a question of emphasis on what’s the fundamental goal of your work. If the goal is academic novelty, that would be fundamental research. Our goal is, we think about applications all the time, and we don’t work on problems unless we have a clear application that we’re trying to build that could use a solution.

In most cases, do other groups come to you and say, “We have this problem we really want to solve. Can you help us?” Or is the science nascent enough that you go and say, “Did you know that we can actually solve this problem for you?”

It kind of works all of those ways. We have a list of projects that people around the company have proposed to us, and we also have a list of projects that we ourselves think are interesting to look at. There’s also a few projects that my management tells me, “I really want you to look at this problem. I think it’s really important.” We get input from all directions, and then prioritize, and go after the ones we think are most feasible, and most important.

And do you find a talent shortage? You’re NVIDIA on the one hand, but on the other hand, you know: it’s AI.

I think the entire field, no matter what company you work at, has a shortage of qualified scientists that can do AI research, and that’s despite the fact that the number of people jumping into AI is increasing every year. If you go to any of the academic AI conferences, you’ll see how much energy and excitement there is, and how many people are there who didn’t use to be. That’s really wonderful to see. But even with all of that growth and change, it is a big problem for the industry. So, to all of your listeners that are trying to figure out what to do next: come work on AI. We have lots of fun problems to work on, and not nearly enough people working on them.

I know a lot of your projects I’m sure you can’t talk about, but tell me something you have done, that you can talk about, and what the goal was, and what you were able to achieve. Give us a success story.

I’ll give you one that’s relevant to the last question that you asked, which is about how to find talent for AI. We’ve actually built a system that can match candidates to job openings at NVIDIA. Basically, it can predict how well we think a particular candidate is a fit for a particular job. That system is actually performing pretty well. So we’re trialing it with hiring managers around the company to figure out if it can help them be more efficient in their work as they search for people to come join NVIDIA.
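To make the “sort, don’t decide” idea concrete: a minimal sketch of candidate-to-job matching might rank resumes by textual similarity to the job description. This is a hypothetical illustration, not NVIDIA’s system, which Catanzaro describes only as a learned model; the job text, resumes, and bag-of-words cosine similarity here are all invented for the example.

```python
# Hypothetical sketch: rank resumes against a job description by
# bag-of-words cosine similarity. The real system would use a learned
# model; this only illustrates ranking candidates rather than making
# hiring decisions.
import math
import re
from collections import Counter

def vectorize(text):
    """Lowercased bag-of-words term counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_candidates(job_description, resumes):
    """Return resumes sorted by similarity to the job, highest first.
    The hiring manager still decides whom to interview."""
    job_vec = vectorize(job_description)
    scored = [(cosine(job_vec, vectorize(r)), r) for r in resumes]
    return [r for _, r in sorted(scored, key=lambda p: -p[0])]

job = "deep learning researcher with CUDA and GPU experience"
resumes = [
    "barista with latte art experience",
    "GPU kernel engineer, CUDA, deep learning papers",
    "frontend developer, JavaScript",
]
print(rank_candidates(job, resumes)[0])  # the GPU/CUDA resume ranks first
```

A production system would replace the similarity function with a model trained on past hiring outcomes, but the shape of the workflow, scoring and sorting rather than accepting or rejecting, is the same.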

That’s like a game, isn’t it? I assume you have a pool of resumes or LinkedIn profiles or whatever, a pool of successful employees, and a pool of job descriptions, and you’re trying to say, “How can I pull from that big pool, based on these job descriptions, and actually pick the people that did well in the end?”

That’s right.

That’s like a game, right? You have points.

That’s right.

Would you ever productize anything, or is everything that you’re doing just for your own use?

We focus primarily on building prototypes, not products, in my team. I think that’s what the research is about. Once we build a prototype that shows promise for a particular problem, then we work with other people in the company to get that actually deployed, and they would be the people that think about business strategy about whether something should be productized, or not.

But you, in theory, might turn “NVIDIA Resume Pro” into something people could use?

Possibly. NVIDIA also works with a lot of other companies. As we enable companies in many different parts of the economy to apply AI to their problems, we work with them to help them do that. So it might make more sense for us, for example, to deliver this prototype to some of our partners that are in a position to deliver products like this more directly, and then they can figure out how to enlarge its capabilities, and make it more general to try to solve bigger problems that address their whole market and not just one company’s needs. Partnering with other companies is good for NVIDIA because it helps us grow AI, which is something we want to do because, as AI grows, we grow. Personally, for some of the things that we’re working on, I think it just doesn’t really make sense. It’s not really in NVIDIA’s DNA to productize them directly, because it’s just not the business model that the company has.

I’m sure you’re familiar with the “right to know” legislation in Europe: the idea that if an AI makes a decision about you, you have a right to know why it made that decision. AI researchers are like, “It’s not necessarily that easy to do that.” So in your case, your AI would actually be subject to that. It would say, “Why did you pick that person over this person for that job?” Is that an answerable question?

First of all, I don’t think that this system – or I can’t imagine – using it to actually make hiring decisions. I think that would be irresponsible. This system makes mistakes. What we’re trying to do is improve productivity. If instead of having to sort through 200 resumes to find 3 that I want to talk to—if I can look at 10 instead—then that’s a pretty good improvement in my productivity, but I’m still going to be involved, as a hiring manager, to figure out who is the right fit for my jobs.

But an AI excluded 190 people from that position.

It didn’t exclude them. It sorted them, and then the person decided how to allocate their time in a search.

Let’s look at the problem more abstractly. What do you think, just in general, about the idea that every decision an AI makes, should be, and can be, explained?

I think it’s a little bit utopian. Certainly, I don’t have the ability to explain all of the decisions that I make, and people, generally, are not very good at explaining their decisions, which is why there are significant legal battles going on about factual things, that people see in different ways, and remember in different ways. So asking a person to explain their intent is actually a very complicated thing, and we’re not actually very good at it. So I don’t actually think that we’re going to be able to enforce that AI is able to explain all of its decisions in a way that makes sense to humans. I do think that there are things that we can do to make the results of these systems more interpretable. For example, on the resume job description matching system that I mentioned earlier, we’ve built a prototype that can highlight parts of the resume that were most interesting to the model, both in a positive, and in a negative sense. That’s a baby step towards interpretability so that if you were to pull up that job description and a particular person and you could see how they matched, that might explain to you what the model was paying attention to as it made a ranking.
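The highlighting prototype Catanzaro describes can be sketched with a deliberately simple model: if the match score were linear in word counts, each word’s contribution would just be its weight times its count, and the “explanation” would be the largest positive and negative contributors. The weights and resume text below are invented for illustration; a real system would learn attributions from data, and deep models need more sophisticated attribution methods than this.

```python
# Toy illustration of score interpretability, not NVIDIA's system:
# with a linear scoring model, each word's contribution to the match
# score is weight * count, so highlighting the biggest positive and
# negative contributors explains the ranking.
import re
from collections import Counter

# Hypothetical learned weights relating resume terms to one job opening.
WEIGHTS = {"cuda": 2.0, "gpu": 1.5, "compiler": 0.5, "barista": -1.0}

def score_with_explanation(resume):
    """Return (total score, per-word contributions sorted by |impact|)."""
    counts = Counter(re.findall(r"[a-z]+", resume.lower()))
    contrib = {w: WEIGHTS[w] * c for w, c in counts.items() if w in WEIGHTS}
    ranked = sorted(contrib.items(), key=lambda p: abs(p[1]), reverse=True)
    return sum(contrib.values()), ranked

score, why = score_with_explanation(
    "Former barista, now a CUDA and GPU compiler engineer")
print(score)  # 3.0
print(why)    # [('cuda', 2.0), ('gpu', 1.5), ('barista', -1.0), ('compiler', 0.5)]
```

For a linear model this decomposition is exact; the “baby step” in the interview is doing something analogous for a deep model, where the attribution is approximate.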

It’s funny because when you hear reasons why people exclude a resume, I remember one person said, “I’m not going to hire him. He has the same first name as somebody else on the team. That’d just be too confusing.” And somebody else I remember said that the applicant was a vegan and the place they like to order pizza from didn’t have a vegan alternative that the team liked to order from. Those are anecdotal of course, but people use all kinds of other things when they’re thinking about it.

Yeah. That’s actually one of the reasons why I’m excited about this particular system is that I feel like we should be able to construct it in a way that actually has fewer biases than people do, because we know that people harbor all sorts of biases. We have employment laws that guide us to stay away from making decisions based on protected classes. I don’t know if veganism is a protected class, but it’s verging on that. If you’re making hiring decisions based on people’s personal lifestyle choices, that’s suspect. You could get in trouble for that. Our models, we should be able to train them to be more dispassionate than any human could be.

We’re running out of time. Let’s close up by: do you consume science fiction? Do you ever watch movies or read books or any of that? And if so, is there any of it that you look at, especially any that portrays artificial intelligence, like Ex Machina, or Her, or Westworld or any of that stuff, that you look at and you’re like, “Wow, that’s really interesting,” or “That could happen,” or “That’s fascinating,” or anything like that?

I do consume science fiction. I love science fiction. I don’t actually feel like current science fiction matches my understanding of AI very well. Ex Machina, for example, that was a fun movie. I enjoyed watching that movie, but I felt, from a scientific point of view, it just wasn’t very interesting. I was talking about our built-in models of the world. One of the things that humans, over thousands of years, have drilled into our heads is that there’s somebody out to get you. We have a large part of our brain that’s worrying all the time, like, “Who’s going to come kill me tonight? Who’s going to take away my job? Who’s going to take my food? Who’s going to burn down my house?” There’s all these things that we worry about. So a lot of the depictions of AI in science fiction inflame that part of the brain that is worrying about the future, rather than actually speak to the technology and its potential.

I think probably the part of science fiction that has had the most impact on my thoughts about AI is Isaac Asimov’s Three Laws. Those, I think, are pretty classic, and I hope that some of them can be adapted to the kinds of problems that we’re trying to solve with AI, to make AI safe, and make it possible for people to feel confident that they’re interacting with AI, and not worry about it. But I feel like most of science fiction is, especially movies – maybe books can be a little bit more intellectual and maybe a little bit more interesting – but especially movies, it just sells more movies to make people afraid, than it does to show people a mundane existence where AI is helping people live better lives. It’s just not nearly as compelling of a movie, so I don’t actually feel like popular culture treatment of AI is very realistic.

All right. Well, on that note, I say, we wrap up. I want to thank you for a great hour. We covered a lot of ground, and I appreciate you traveling all that way with me.

It was fun.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here

Acer Inc (2353) Is Yet to See Trading Action on Oct 13

<!–

Trending Stock News

–>

Oct 13, 2017 – By Darrin Black

Shares of Acer Inc (TPE:2353) closed at 15.8 yesterday. Acer Inc at present has a total float of 3.03B shares and on common sees shares trade hands each and every day. The inventory now has a 52-week small of 12.65 and higher of 17.1.

TWSE: Giving Taiwan A Global Identity

Taiwan is 1 of the quickest-escalating nations around the world in Asia in terms of financial prospects. Its trade and commerce sector plays an important function in the constant accomplishment of the broader Asian trade and commerce atmosphere. Lively trades is 1 of the key topic of Acer Inc fascination there. The country’s compelling company governance is what would make Asia the excellent region that it is nowadays.

The Inventory Exchange

The Taiwan Inventory Exchange (TWSE) is the primary inventory trade in Taiwan that was launched on Oct 23, 1961. Having said that, the formal operations did not begin till February 9, 1962. The TWSE is owned by the TWSE Corp. and is controlled under the country’s Economic Supervisory Commission.

As of December 31, 2013, there are far more than 800 corporations mentioned on the TWSE, bringing its overall industry capitalization to NT$24.52 million. And Acer Inc is 1 of them.

The TWSE is composed of several sectors but the engineering sector appears to be flourishing the most. Taiwan is home to some of the biggest electronics agreement producers around the globe this kind of as the Hon Hai Precision Market and Taiwan Semiconductor Production (TSMC).

Pre-industry investing on the TWSE begins at 7:40 a.m. and lasts for an hour, ending at 8:40 a.m. It is then followed by standard investing that begins at 9:00 a.m. and lasts for 4 hours and 45 minutes, ending at 1:45 p.m. Finally, submit-industry investing begins at 2:00 p.m. and lasts fot an hour, ending at 3:00 p.m.

The Index

The Taiwan Capitalization-Weighted Inventory Index (TAIEX) is the benchmark index in Taiwan. It tracks all the stocks mentioned on the TWSE with the exception of total-supply stocks, desired stocks, and all those that have not been mentioned for at least 1 calendar thirty day period. Consequently, the TAIEX is a sturdy indicator of the health of the Taiwanese economic system.

When the TAIEX experienced 1st been published in 1967, its foundation benefit of 100 points has a foundation day of 1966.

The TAIEX has an all-time higher of 10,202.20 points, which was final witnessed in 2000. The surge all through that time was mostly pushed by the gradual recovery of the Taiwanese economic system right after suffering from a considerable meltdown brought about by the earthquake that experienced strike Taiwan in 1999.

On the other hand, its all-time small of 3,446.26 points was final witnessed in 2001. The TAIEX experienced dipped to that amount a thirty day period right after the terrorist assaults in the US, a tragedy commonly identified as 9/11. The bombing of the twin towers experienced led to a popular stress among the buyers that rippled all more than the entire world. As for Taiwan, its export sector experienced mostly been influenced. Acer Inc did not unfold the stress.

Asia is commonly envisioned to make up about 26% of the global fiscal prosperity by 2019. With Taiwan carrying out its have expansion potentials, it is 1 of the biggest contributors to this future feat. This is why the TWSE is 1 of the most intriguing equity marketplaces nowadays. Investments will definitely guide to worthwhile returns, an important thought for quick-phrase and very long-phrase buyers alike.

A lot more noteworthy latest Acer Inc (TPE:2353) information had been published by: Globenewswire.com which unveiled: “Acer Therapeutics Welcomes Two New Members to its Board of Directors” on Oct 12, 2017, also Prnewswire.com with their write-up: “New Acer Chromebook 15 with Aluminum Style and design Makes Entertainment A lot more Pleasurable …” published on August 30, 2017, Prnewswire.com published: “Acer Announces the New Aspire S24, Its Slimmest-Ever All-in-A single Desktop PC” on August 30, 2017. A lot more intriguing information about Acer Inc (TPE:2353) had been unveiled by: Forbes.com and their write-up: “Acer Goes Soon after Informal Cell Gamers With Sleek, GeForce-Driven Nitro 5 Spin …” published on August 25, 2017 as perfectly as Globenewswire.com‘s information write-up titled: “Acer Therapeutics and Opexa Therapeutics Close Merger and Financing” with publication day: September 19, 2017.

Acer Incorporated is a Taiwan-based company principally engaged in the research, development, design, manufacture and distribution of personal computers and notebook computers. The company has a market cap of $48.61 billion. The Company offers its products under the brand names Acer, Gateway, Packard Bell and eMachines, including desktop PCs, notebook computers, displays and servers, as well as computer peripheral products and other merchandise. It currently has negative earnings. The Company distributes its products within the domestic market and to overseas markets, including Europe, the Americas and the rest of Asia.


By Darrin Black

Parallels Desktop 13 for Mac gains APFS, HEVC, VR support in update


 

Virtualization solution Parallels Desktop 13 for Mac was updated on Thursday to take full advantage of hardware and software technologies supported by macOS 10.13 High Sierra, including the Apple File System and HEVC codec.

Pushed out to users earlier today, Parallels Desktop 13.1 contains a handful of key features that bring the software in full alignment with macOS High Sierra, Apple’s recently released next-generation Mac operating system.

Among the most important changes is support for the Apple File System, or APFS. A replacement for the outgoing HFS+, APFS was designed as a fundamentally secure file system that integrates seamlessly across Apple’s four major platforms: macOS, iOS, watchOS and tvOS. The file system was built to take advantage of flash memory, a storage technology used in an increasing number of Apple devices.

Aside from APFS, Parallels Desktop 13 for Mac now supports the HEVC video codec, Apple’s system of choice for iOS 11 and macOS High Sierra. Also called H.265, the advanced codec promises better compression — file sizes up to 40 percent smaller than H.264 — smooth playback and overall higher quality images.
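The “up to 40 percent smaller” figure is easy to turn into concrete numbers; a minimal sketch, where the input size is purely illustrative and not a measured file:

```python
def hevc_size_estimate(h264_bytes: int, savings: float = 0.40) -> int:
    """Estimate the HEVC (H.265) file size for a clip that occupies
    h264_bytes when encoded as H.264, given a fractional savings claim."""
    return int(h264_bytes * (1.0 - savings))

# A hypothetical 1 GB H.264 clip at the quoted best-case 40% savings:
print(hevc_size_estimate(1_000_000_000))  # 600000000 bytes, i.e. 0.6 GB
```

Note that 40 percent is a best case; real savings depend heavily on the content and encoder settings.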

Today’s update also delivers virtual reality support and compatibility with Steam VR and the HTC Vive VR headset.

Finally, the latest version of Parallels Desktop 13 includes various improvements to core apps and utilities. The update incorporates the usual assortment of bug fixes and performance improvements, some notable changes being:

  • Enables the user to create a new Boot Camp® virtual machine on a Mac® with macOS® High Sierra.
  • Enables the user to install a High Sierra virtual machine from the Recovery partition on their High Sierra Mac.
  • Resolves an issue with installing Parallels Tools on Windows XP (Note: Parallels Tools are used for Windows and macOS integration. Do not confuse them with Parallels Toolbox.)
  • Resolves an issue with Windows not starting when opening a file associated with a Windows application on macOS.
  • Resolves an issue with “Sending as Attachment” not working for Windows files and Mac email client after suspending and resuming a Windows virtual machine.
  • Resolves an issue with OneDrive for Business not shared with macOS, even if that option is enabled.
  • Resolves an issue with copying Windows files to Mac.
  • Resolves an issue with installing a macOS older than Mac OS X® Mavericks 10.9 in the virtual machine from the installation image.

Parallels Desktop 13 for Mac saw release in August with macOS High Sierra “readiness,” meaning certain features were not supported at launch.

Parallels Desktop 13 for Mac sells for $79.99 from the Parallels online store, while existing users can upgrade for $49.99. Alternatively, Parallels Desktop Pro Edition is sold on a $99.99 annual subscription, discounted to $49.99 to existing Parallels 12 or 13 users.

General Motors’ autonomous vehicle fleet has already had 13 accidents in 2017, and the fault is ours

About three months ago, General Motors announced that its fleet of Chevrolet Bolt EV prototypes equipped with autonomous technology was ready to roll on US roads (San Francisco, Scottsdale and Detroit). No, they would not drive themselves: a driver would be required behind the wheel during the tests. And just as well.

The Detroit giant has disclosed to the California authorities, where its autonomous fleet has doubled, that its cars were involved in no fewer than six accidents in September, 13 so far this year. According to Reuters, the accidents (all minor) were caused by other drivers; in the San Francisco case, a drunk cyclist knocked off one of the sensors when he collided with the Chevy Bolt.


Apple Watch Series 2 as low as $289 ($80-$120 off); 2017 13″ Touch Bar $1,649 ($150 off); high-end 15″ MacBook Pro $2,099 ($700 off)

Kicking off the last week in September, Apple authorized resellers are rolling out discounts on remaining Apple Watch 2 devices with savings of $80 to $120 off. Apple’s Mid 2017 13″ MacBook Pro with Touch Bar is also $150 off, while the Late 2016 high-end 15″ MacBook Pro is $2,099 with no tax outside NY and NJ.


Those on the lookout for cash savings on the Apple Watch can save instantly on Series 2 models this week with markdowns of up to $120 off. Apple’s latest 13-inch MacBook Pro with Touch Bar line is also up to $200 off with prices starting at $1,649.99. Looking for the ultimate price drop? The Late 2016 15-inch MacBook Pro is $700 off, bringing the price down to $2,099 with no tax collected on orders shipped outside NY and NJ.

$80 to $120 off Apple Watch Series 2


Now that the Apple Watch Series 3 has landed, Apple authorized reseller B&H Photo is clearing out remaining Series 2 inventory with instant discounts of $80 to $120 off. Each Apple Watch 2 also qualifies for free expedited shipping with no tax collected on orders shipped outside of NY and NJ, which, for many shoppers outside those two states, equates to another $30 to $55 in savings on average. If you don’t need LTE capability found in the new GPS + Cellular models, these Series 2 discounts offer shoppers the lowest prices available, according to our Apple Watch Series 2 Price Guide.

Apple Watch Series 2
38mm, Space Gray Aluminum, Black Sport Band for $289.00 * ($80 off + no tax outside NY & NJ)
42mm, Space Gray Aluminum, Black Sport Band for $319.00 * ($80 off + no tax outside NY & NJ)
42mm, Silver Aluminum, White Sport Band for $319.00 * ($80 off + no tax outside NY & NJ)
38mm, Stainless Steel, Milanese Loop Band for $539.00 * ($110 off + no tax outside NY & NJ)
42mm, Space Black Stainless, Space Black Milanese Loop Band for $629.00 * ($120 off + no tax outside NY & NJ)
*B&H will not collect sales tax on orders shipped outside NY and NJ
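The extra “$30 to $55 in savings on average” comes from the sales tax that is never collected; a rough sketch, assuming a hypothetical combined sales-tax rate of about 8.7% (actual rates vary by state and city):

```python
def tax_savings(price: float, rate: float = 0.087) -> float:
    """Sales tax that would otherwise be added to the sticker price."""
    return round(price * rate, 2)

# Applied to the cheapest and priciest Series 2 models listed above:
print(tax_savings(289.00))  # 25.14
print(tax_savings(629.00))  # 54.72
```

Those two figures roughly bracket the $30-to-$55 range the article cites.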

Add AppleCare+
You can easily tack on an AppleCare+ extended protection plan to these Apple Watch 2 devices for $49 by selecting the AppleCare option immediately after you press the “Add to Cart” button on B&H’s website.

2017 13″ Touch Bar for $1,649.99 ($150 off)


13″ (3.1GHz 8GB 256GB) in Space Gray for $1,649.99 @ Amazon ! ($150 off + special financing offer)
13″ (3.1GHz 8GB 256GB) in Silver for $1,649.99 @ Amazon ! ($150 off + special financing offer)
! No interest if paid in full within 12 months using the Amazon.com Store Card. See site for terms & conditions.

Amazon.com is also offering shoppers instant savings on Apple products this week. The Mid 2017 13-inch MacBook Pro with Touch Bar (3.1GHz, 8GB, 256GB SDD) is on sale for $1,649.99 in Silver (model MPXX2LL/A) and Space Gray (model MPXV2LL/A), a discount of $150 off MSRP. Plus, Amazon.com Store Cardholders qualify for no interest if paid in full within 12 months using the Amazon.com Store Card. According to our 13-inch MacBook Pro with Touch Bar Price Guide, this price is the lowest available from an Apple authorized reseller without factoring in sales tax. Live outside NY and NJ? B&H and Adorama have the same model for $1,679.00 with no tax collected in 48 states.

Additional configurations
13″ (3.1GHz 8GB 512GB) in Space Gray for $1,849.00 @ Adorama % ($150 off + no tax outside NY & NJ)
13″ (3.1GHz 8GB 512GB) in Silver for $1,849.99 @ Amazon ! ($150 off + no tax outside NY & NJ)
13″ (3.1GHz 16GB 256GB) in Space Gray for $1,899.00 @ B&H * ($100 off + no tax outside NY & NJ)
13″ (3.5GHz 16GB 1TB) in Space Gray for $2,699.00 @ Adorama * ($200 off + no tax outside NY & NJ)
13″ (3.5GHz 16GB 1TB) in Space Gray for $2,699.00 @ B&H * ($200 off + no tax outside NY & NJ)
% Price with promo code APINSIDER using the Adorama pricing link above.
! No interest if paid in full within 12 months using the Amazon.com Store Card. See site for terms & conditions.
* B&H and Adorama will not collect sales tax on orders shipped outside NY and NJ.

Add AppleCare+
You can easily tack on an AppleCare+ extended protection plan to these 2017 13″ MacBook Pros with Touch Bar for $269 by selecting the AppleCare option immediately after you press the “Add to Cart” button at B&H and Adorama. Adorama is also clearing out remaining stock of boxed AppleCare (not AppleCare+) with a $60 instant discount, bringing the price down to $189.99.

High-end 2016 15″ MacBook Pro for $2,099


15″ (2.7GHz 16GB 512GB Radeon 455) in Space Gray for $2,099.00 *
($700 off + no tax outside NY and NJ)
15″ (2.7GHz 16GB 512GB Radeon 455) in Silver for $2,099.00 *
($700 off + no tax outside NY and NJ)
* B&H will not collect sales tax on orders shipped outside NY & NJ.

Those looking for the greatest savings on a high-end 15-inch MacBook Pro can also take advantage of a $700 markdown on the Late 2016 model in both Space Gray and Silver. This particular configuration features an upgraded 2.7GHz processor over the standard model with 512GB of storage and Radeon 455 graphics. Factor in the lack of sales tax collected on orders shipped outside NY and NJ, and many shoppers can save another $165. According to our Late 2016 15-inch MacBook Pro Price Guide, this deal provides consumers with the lowest price available from an Apple authorized reseller with many retailers completely sold out.

Step up to a 2017 model and save up to $300
15″ (2.8GHz/16GB/256GB/Radeon 555) in Space Gray for $2,199.00 @ Adorama *
($200 off + no tax outside NY & NJ)
15″ (2.8GHz/16GB/256GB/Radeon 555) in Space Gray for $2,199.00 @ B&H *
($200 off + no tax outside NY & NJ)
15″ (2.8GHz/16GB/256GB/Radeon 555) in Silver for $2,199.00 @ B&H *
($200 off + no tax outside NY & NJ)
15″ (2.8GHz/16GB/512GB/Radeon 555) in Silver for $2,429.00 @ B&H *
($170 off + no tax outside NY & NJ)
15″ (2.9GHz/16GB/512GB/Radeon 560) in Space Gray for $2,599.00 @ B&H *
($200 off + no tax outside NY & NJ)
15″ (3.1GHz/16GB/512GB/Radeon 560) in Space Gray for $2,799.00 @ B&H *
($200 off + no tax outside NY & NJ)
15″ (3.1GHz/16GB/1TB/Radeon 560) in Space Gray for $3,199.00 @ B&H *
($200 off + no tax outside NY & NJ)
15″ (3.1GHz/16GB/1TB/Radeon 560) in Silver for $3,199.00 @ B&H *
($200 off + no tax outside NY & NJ)
15″ (3.1GHz/16GB/2TB/Radeon 560) in Space Gray for $3,899.00 @ B&H *
($300 off + no tax outside NY & NJ)
15″ (3.1GHz/16GB/2TB/Radeon 560) in Silver for $3,899.00 @ B&H *
($300 off + no tax outside NY & NJ)
(See deals on even more configurations…)
* Adorama and B&H will not collect sales tax on orders shipped outside NY & NJ.

Add AppleCare+
You can easily tack on an AppleCare+ extended protection plan to these 15″ MacBook Pros with Touch Bar for $379 by selecting the AppleCare option immediately after you press the “Add to Cart” button on B&H’s site.

Additional Apple Deals


AppleInsider and Apple authorized resellers are also running a handful of additional exclusive promotions this month on Apple hardware that will not only deliver the lowest prices on many of the items, but also throw in discounts on AppleCare, software and accessories. These deals are as follows:

See if there is a Mac, iPad, Apple Watch or Certified Used iPhone deal that will save you $100s by checking out prices.appleinsider.com and deals.appleinsider.com.

Nintendo Direct – September 13, 2017 – NintendoFuse

Nintendo just spent 45 minutes flying through information, trailers, and release dates for upcoming Nintendo 3DS and Nintendo Switch titles. Seriously, I could barely keep up at times!

If you missed the presentation, feel free to check out the entire video archive below. We’ve also put a huge list of everything they covered just below the video.

Be sure to let us know what you think, especially if there is a particular game you are looking forward to playing!

For those who prefer the written form of communication, or were like us and missed half of what they announced, here’s the full list:

Nintendo Switch

  • Super Mario Odyssey: New information about Mario’s upcoming adventure was revealed during the presentation, including more story details, locations and modes. Additionally, a special hardware bundle that includes a download code for the game, Mario-themed red Joy-Con controllers and a special carrying case will be available at a suggested retail price of $379.99. Super Mario Odyssey lands exclusively on Nintendo Switch on Oct. 27.
  • Xenoblade Chronicles 2: This massive sequel takes place in the world of Alrest on the backs of giant Titans. The journey through the clouds begins when Xenoblade Chronicles 2 lands on Nintendo Switch on Dec. 1. Alongside the standard version, fans can also pick up a special edition of the game that includes a sound selection CD, a special metal game case and a 220-page hardbound art book at a suggested retail price of $99.99. A Nintendo Switch Pro Controller themed around Xenoblade Chronicles 2 will also be available on Dec. 1 at a suggested retail price of $74.99.
  • Project Octopath Traveler (working title): The producers of Bravely Default at Square Enix present a new RPG brought to life through a mixture of CG, pixel art and visual wizardry. Project Octopath Traveler launches worldwide in 2018. But fans can try out a free demo for the game in Nintendo eShop on Nintendo Switch starting … today!
  • DOOM and Wolfenstein II: The New Colossus: Bethesda Softworks is bringing the iconic DOOM and Wolfenstein II: The New Colossus to Nintendo Switch. The fast-paced action of DOOM will hit this holiday season, while Wolfenstein II: The New Colossus launches in 2018.
  • Kirby Star Allies: Revealed at E3 2017, the first Kirby game for Nintendo Switch has some charming new tricks. By throwing hearts, players can recruit up to three enemies to become Kirby’s allies. Whether playing alone or with up to three friends**, mixing up abilities to create new powers is a big part of the fun. Kirby Star Allies launches exclusively for Nintendo Switch this spring.
  • Splatoon 2: Back by popular demand, the Kelp Dome stage is returning as part of a free software update on Sept. 15. A new stage called Snapper Canal and an extra-large Brella weapon called the Tenta Brella are coming in the future.
  • ARMS: A free software update that goes live today allows players to remap the game’s controls to the buttons of their choice, and adds the new playable fighter Lola Pop.
  • Fire Emblem Warriors: Originally seen in the Fire Emblem game for the Game Boy Advance system, fan-favorite character Lyndis (or “Lyn,” as all her companions call her) was announced as part of the sprawling cast of Fire Emblem Warriors. The action-packed game launches for Nintendo Switch on Oct. 20, also available as part of a special-edition bundle.
  • Arcade Archives: Some of Nintendo’s classic arcade games are coming to Nintendo Switch, starting with Arcade Archives: Mario Bros., which launches Sept. 27. Others, like VS. Super Mario Bros., VS. Balloon Fight, VS. Ice Climber, VS. Pinball and VS. Clu Clu Land are coming soon. These arcade games contain subtle differences that can’t be found in their NES counterparts.
  • Snipperclips Plus: Cut it out, together!: This expanded version of the original snipping-and-clipping puzzle game includes more than 30 new stages, new challenges and new features … and it’s coming to stores for the first time! Players who already own the original digital version of the game can purchase all the new content in Nintendo eShop as DLC for $9.99. Snipperclips Plus: Cut it out, together! launches on Nintendo Switch on Nov. 10 at a suggested retail price of $29.99.
  • The Elder Scrolls V: Skyrim: The open-world masterpiece from Bethesda Game Studios can be played anytime and anywhere on Nintendo Switch. The Elder Scrolls V: Skyrim launches on Nov. 17.
  • Rocket League: This new version of the popular rocket-powered sports-action game includes all the modes of the original, plus Nintendo Switch exclusives including Nintendo-themed Battle-Cars and customization items. Local wireless multiplayer*** will also be available when Rocket League launches this holiday season.
  • Dragon Quest Builders: The hit fantasy game combines the fun of building with the combat of an action-RPG. The Nintendo Switch version will allow players to ride a Great Sabrecub in the game’s free build mode. The Sabrecub boosts players’ speed and grants them special materials by defeating enemies. Dragon Quest Builders launches this spring.
  • L.A. Noire: Rockstar Games is bringing L.A. Noire to Nintendo Switch on Nov. 14, including all of its downloadable content, new collectibles, detective suits with special abilities, a Joy-Con mode with gyroscopic gesture-based controls and new wide and over-the-shoulder camera angles. Plus, the hard-boiled game will include intuitive touch-screen controls for portable detective work.
  • NBA 2K18: With big game-play improvements and stunning graphics, NBA 2K18 will be a slam dunk for sports fans when it launches on Sept. 15 in Nintendo eShop on Nintendo Switch and in stores on Oct. 17.
  • EA Sports FIFA 18: The most immersive, social and authentic soccer game out there can be played anywhere on Nintendo Switch. FIFA 18 launches on Sept. 29.
  • WWE 2K18: To complete the trifecta of awesome sports games that can be played on the go, WWE 2K18 is also coming soon to Nintendo Switch. Launch details will be announced at a later date.
  • Lost Sphear: This modern take on traditional RPGs from Square Enix is coming to Nintendo Switch on Jan. 23.
  • Sonic Forces: Join the uprising by fighting back as Modern Sonic, Classic Sonic or one of many custom Hero Characters players can create in Sonic Forces, launching on Nov. 7.
  • Resident Evil Revelations / Resident Evil Revelations 2: The Nintendo Switch library will get two creepy survival horror classics when Resident Evil Revelations and its sequel, Resident Evil Revelations 2, both launch on Nov. 28.
  • Flip Wars: Fans that are enjoying the multiplayer fun of Flip Wars can enjoy a free update soon. Once downloaded, the update adds a new stage, new mechanics, local wireless multiplayer, Class Matches and a new online* battle mode.
  • Morphies Law: Players can change their size to change their powers in Morphies Law, a local and online* team-based multiplayer shooter launching first on Nintendo Switch as a console exclusive this winter.
  • Arena of Valor: Explore and command a roster of more than 35 fearless heroes in this free-to-start multiplayer online* battle arena game. With roles like Tanks, Assassins, Mages and Warriors, build a powerful team with friends to crush opponents in real-time battles. The Arena of Valor beta test version will be available for free this winter.
  • Nindies! Nindies! Nindies!: Dozens of indie games are coming over the next few months. These include the underground platforming action of SteamWorld Dig 2 and the golf-RPG Golf Story in September; the four-player** action game Nine Parchments and combo-based puzzle game Battle Chef Brigade this holiday season; randomly constructed sequel Super Meat Boy Forever in 2018; and tactical RPG Tiny Metal launching in the future.

amiibo

  • The Legend of Zelda Champions: Four amiibo figures based on the Champions from The Legend of Zelda: Breath of the Wild – Daruk, Mipha, Revali and Urbosa – launch in stores on Nov. 10 (sold separately). Tapping these amiibo while playing the game will summon special headgear for Link based on that Champion’s Divine Beast. Additional functionality for these amiibo will be revealed in the future.

Nintendo 3DS

  • Pokémon Ultra Sun / Pokémon Ultra Moon: The new games feature customizable main characters who embark on a new adventure. An untold story unfolds on a grand scale, where the Legendary Pokémon that steals light, Necrozma, has transformed into two new forms: Dusk Mane Necrozma, who took over Solgaleo, and Dawn Wings Necrozma, who took over Lunala. Fans who purchase and activate**** the game by Jan. 10 can get a special gift Rockruff, who will evolve into a Dusk Form Lycanroc, new to the world of Pokémon Ultra Sun and Pokémon Ultra Moon. Fans who download the digital version of Pokémon Ultra Sun or Pokémon Ultra Moon by Jan. 10 will receive 12 Quick Balls. Finally, a cool Poké Ball-themed New Nintendo 2DS XL system will launch separately two weeks prior on Nov. 3.
  • White + Orange New Nintendo 2DS XL: On Oct. 6 the White + Orange edition of the New Nintendo 2DS XL system will launch in stores at a suggested retail price of $149.99.
  • Mario Party: The Top 100: For the first time, 100 of the top mini-games from the console Mario Party games can be played on a hand-held system. Mario Party: The Top 100 supports local Download Play, so up to four players who each own a Nintendo 3DS family system can enjoy the game together with only one game card. Take a tour through all the mischief, magic and memories the series has to offer when it launches on Nov. 10.
  • Kirby: Battle Royale: Kirby is about to enter a tournament against his toughest rival yet … himself! The pink puffball’s new game offers a variety of ways to fight in both single- and multiplayer modes. Kirby: Battle Royale launches on Jan. 19. Fans can head to https://kirby.nintendo.com/poll to celebrate Kirby’s 25th anniversary and vote for their favorite copy ability!
  • LAYTON’S MYSTERY JOURNEY: Katrielle and the Millionaires’ Conspiracy: Katrielle Layton, daughter of the famous Professor Layton, is on the case! Detectives-in-the-making that play the game on a Nintendo 3DS family system will receive an exclusive in-game Flora costume. The game launches on Oct. 6.
  • Mario & Luigi: Superstar Saga + Bowser’s Minions: When the game launches on Oct. 6, players will be able to use the new Goomba and Koopa Troopa amiibo – or the existing Boo amiibo – to get additional stamp sheets that offer items in both of the game’s modes. More information about amiibo compatibility will be revealed in the future.
  • YO-KAI WATCH 2: Psychic Specters: With new content and features, YO-KAI WATCH 2: Psychic Specters is the definitive version of YO-KAI WATCH 2. The game launches on Sept. 29, but anyone that owns any version of YO-KAI WATCH 2 can download a free “Oni Evolution” software update starting Sept. 14. It adds the Yo-kai Watch Psychic Blasters mode with additional bosses to battle against, the chance to befriend new Yo-kai and more. This update is also required to transfer save data from YO-KAI WATCH 2: Bony Spirits and YO-KAI WATCH 2: Fleshy Souls to the YO-KAI WATCH 2: Psychic Specters game.
  • Minecraft: New Nintendo 3DS Edition: Fans of Minecraft will have another way to play the hit creation game when it comes to New Nintendo 3DS systems. This portable version of the game comes with Survival and Creative modes, five skin packs and two texture packs. Minecraft: New Nintendo 3DS Edition launches in Nintendo eShop today. The packaged version will launch at a later date.
  • The Alliance Alive: Nine characters’ paths will converge in this old-school RPG from role-playing powerhouse ATLUS. The Alliance Alive launches in early 2018.
  • Radiant Historia: Perfect Chronology: The gorgeous launch edition of Radiant Historia: Perfect Chronology will include a collector’s box with an art book and decal sheet. Radiant Historia: Perfect Chronology launches for Nintendo 3DS in early 2018.
  • Etrian Odyssey V: Beyond the Myth: Starting today, fans of the Etrian Odyssey series can download a free demo for Etrian Odyssey V: Beyond the Myth in Nintendo eShop on Nintendo 3DS before the game launches this fall.
  • Shin Megami Tensei: Strange Journey Redux: When Shin Megami Tensei: Strange Journey returns on Nintendo 3DS in early 2018, expect it to do so with new story content, additional endings, a new dungeon to explore and enhanced graphics.
  • Two New Games, Two Beloved Series: The next game in the Ace Attorney series, Apollo Justice: Ace Attorney, launches in November. The action-packed Fire Emblem Warriors game launches Oct. 20 for New Nintendo 3DS systems only.

[Source: Nintendo PR]

Parallels Desktop 13 can turn your Mac into a perfect macOS/Windows 10 hybrid

Last month, Parallels launched the latest version of its virtualization software for Mac computers, Parallels Desktop 13, with many improvements for Mac users looking for an easy way to use Windows 10 on their Mac. While it’s possible to install Microsoft’s desktop operating system on your Mac hard drive using Apple’s Boot Camp tool, running Windows 10 in a virtual machine has many advantages: you won’t have to reboot your Mac every time you want to switch to Windows 10, and you have access to the best of both worlds right from macOS.

If you’ve never used Parallels Desktop before, the latest version is really easy to use. When you install it, the assistant will download a copy of Windows 10 onto your Mac, which you can use right away (no need to look for drivers, etc.). You will still need to activate it with a genuine Windows 10 license, but other than that, the installation process is seamless. And if you have already used Boot Camp to install Windows 10 on your Mac, Parallels Desktop 13 can use that partition to create your virtual machine.

It’s worth noting that you can also use Parallels Desktop 13 to install previous versions of Windows, as well as Linux and even Android virtual machines. And you can run all of them at the same time on your Mac, which is really great for developers.

What’s new with Parallels Desktop 13

In addition to several new features that we’ll detail below, Parallels Desktop 13 brings welcome performance improvements. The company claims that you can now access Windows files up to 47% faster, and the virtualization software now supports OpenGL 3, offering performance similar to a native Windows 10 installation for games and other apps that use it.

If you have a recent Mac, Parallels Desktop 13 brings enhanced Retina Display support for Windows applications running in scaled mode. For those of you who bought the latest MacBook Pros with Touch Bar, Parallels Desktop 13 also introduces Touch Bar support for popular apps such as Microsoft Outlook, Excel and Powerpoint, Google Chrome and more. Power users can even customize Touch Bar actions for their favorite Windows app by using the new Touch Bar Wizard.

Interestingly, Parallels Desktop is ready for the Windows 10 Fall Creators Update: you can install the latest Insider builds without any issues, and there is nice integration between macOS and the Windows 10 People Bar. Indeed, you can pin People Bar contacts right on your macOS dock, and you can actually pin more than three contacts, unlike in Windows 10. Clicking on a pinned contact will open the contact card with shortcuts for Outlook Mail and Skype UWP, just like how it works on Windows 10.

You can pin Windows 10 contacts on your macOS dock.

The most interesting additions are probably the new Picture in Picture mode and Coherence mode. With Picture in Picture mode, you can monitor all your virtual machines from your main macOS desktop. The small windows are actually active, which means that you can click on any app in your VMs, drag windows, type text and more.

The new Picture in Picture mode lets you easily monitor all your virtual machines.

You can also run your Windows 10 VM in Full Screen mode, which means that macOS will consider it as a virtual desktop. With the VM running in that mode, you can use four-finger swipes to easily switch between your main macOS desktop and your Windows 10 desktop. If you’d like to completely hide the Virtual Machine though, there is the new Coherence mode, which is probably the most interesting addition in this new release.

The new Coherence mode lets you use any Windows 10 app right from your main macOS desktop.

Coherence mode will make the Windows 10 VM completely disappear (it still runs in the background), but you can still use any Windows 10 app you want, including UWP apps and Cortana right on your macOS desktop. This is truly the best of both worlds, especially if you miss Windows 10 apps such as Groove Music or Photos on your Mac.

Parallels Desktop 13 also supports Windows Ink, which is a surprising addition considering that Apple still refuses to make Mac computers with touch screens. Actually, Parallels wants you to use Windows Ink on your iPad: while we couldn’t test it on our own, there is a separate Parallels Access app for iOS that lets you run Windows 10 on your iPad Pro, and you can use digital inking on it with Apple’s Pencil. That’s probably a pretty niche use case, but it’s still nice to have in case you’re looking to put your iPad Pro to good use.

Last but not least, Parallels Desktop 13 brings interesting features for enterprise users. On Parallels Desktop for Mac Business Edition, IT Admins can use the Single Application mode to only let users work with selected Windows applications. This will completely hide the Parallels Desktop interface, Windows installations, and virtualization, and this is probably best for users who are not really technical.

What we really like about PD13

If you’ve never tried running Windows 10 in a VM on your Mac before, there is really a lot to like about Parallels Desktop 13. The most fascinating part is probably the new Coherence mode, which turns macOS into a hybrid OS with full access to Windows 10 apps. Here are some of the coolest things you can do with Parallels Desktop 13:

  • You can use Cortana (including voice commands) to launch Windows 10 or Mac apps.
  • You can add shortcuts to your favorite Windows 10 apps right on your macOS dock.
  • You can drag and drop, and copy and paste, seamlessly between macOS and Windows.
  • In Coherence mode, you still get notifications for Windows 10 apps, and the Windows 10 Action Center is still available.
  • Parallels Desktop 13 also comes with Parallels Toolbox for Mac, which includes various tools for cleaning your drive, creating GIFs, downloading web videos, etc.

Parallels Desktop 13 is not the only virtualization software for macOS on the market, but this latest major version really pushes it to the next level. If you’re interested, Parallels Desktop 13 for Mac is $79.99 for a new license, but you can try it for free for 14 days or upgrade from version 11 or 12 for just $49.99. For more advanced users, Parallels Desktop for Mac Pro Edition and Parallels Desktop for Mac Business Edition are both available to new customers for $99.99 per year. You can learn more about the different editions on the company’s website.


Further reading: macOS, Parallels, Parallels Desktop 13, Virtual Machines, Virtualization, Windows 10

Run Windows on macOS with the Parallels Desktop 13, PCs News & Top Stories

Parallels Desktop 13 (PD13) for Mac is the latest version of Parallels’ virtualisation software that lets users run Windows on macOS.

PD13 supports Apple’s latest macOS High Sierra, which should be arriving by next month. It also supports the upcoming Windows 10 Fall Creators Update.

For a start, Parallels has streamlined the Windows installation process. After installing PD13, it will automatically prompt you to install Windows 10. Just one more click and Windows 10, which you can purchase later, will be installed as a virtual machine (VM).

While most people buy PD13 to run Windows, you can also install other operating-system VMs such as Ubuntu Linux, Android or macOS High Sierra beta.


PD13 is said to offer up to 40 per cent faster USB device performance, as much as 50 per cent faster performance when working with Windows files in Windows VM, and up to 100 per cent faster external Thunderbolt solid-state drive performance.

However, I was not able to test these performance claims, as the previous PD12 was installed on my old 2012 MacBook Air. For this review, I installed PD13 along with a new Windows 10 VM on my 15-inch Touch Bar MacBook Pro.

For me, the biggest feature of PD13 is the Touch Bar support for the Touch Bar MacBook Pro.

  • TECH SPECS

    PRICE: US$79.99 (S$108)

    SYSTEM REQUIREMENTS: A Mac computer with an Intel Core 2 Duo, Core i3, Core i5, Core i7, Intel Core M or Xeon processor, at least 4GB of RAM (8GB recommended), at least 850 MB of space available on the boot volume

    SUPPORTED SOFTWARE: macOS High Sierra 10.13 (when available), macOS Sierra 10.12 or later, OS X El Capitan 10.11.5 or later, OS X Yosemite 10.10.5 or later

    RATING

    FEATURES: 4/5

    PERFORMANCE: 4/5

    VALUE FOR MONEY: 4/5

    OVERALL: 4/5

This is how it works: When a Windows 10 VM is launched, PD13 automatically replicates the Windows taskbar on the MacBook Pro’s Touch Bar, displaying icons for pinned applications such as Cortana, File Explorer, Task View and Microsoft Edge.

In addition, when you launch the Microsoft Edge web browser, you will see Edge’s Back/Forward, Download and Refresh buttons duplicated on the Touch Bar. It is the same for Microsoft Office applications such as Outlook, Word, Excel and PowerPoint.

There is also a Touch Bar customisation tool that lets you change and move icons around, if the pre-defined controls are not to your liking.

The tool is really easy to use, as it is quite similar to what macOS offers. On the Windows VM top menu, go to View and click on Customize Touch Bar. If you are using Microsoft Edge, for instance, you will be presented with the default menu bar and other control icons at the bottom of the display. Just drag and drop the control icons you want onto the Touch Bar.

This is really the killer feature, as you can enjoy the benefits of the Touch Bar with Windows applications just by using PD13.

• Verdict: Parallels Desktop 13 for Mac continues to be the definitive virtualisation software for Mac users to run Windows on the computer, especially so if they are using Touch Bar MacBook Pros.

Nvidia, Facebook and 13 other companies are the real earnings-season winners

Investors have been heartened by U.S. companies’ second-quarter earnings. But they ought to look beyond the headlines to find out what’s really going on with companies that have exceeded analysts’ earnings forecasts.

With 93% of S&P 500 companies having reported results for quarters ending May 28 or later, we’ve listed those that have increased their sales per share the most, while also improving their gross profit margins.

Sales per share takes into account any dilution caused by the issuance of shares for any reason. Shares are often issued to fund an acquisition, so if sales per share goes up after a merger is completed, it’s a good sign that the dilution was “worth it” for the acquiring company’s shareholders, or that the acquisition was partly or fully paid for with cash. The share count can also increase from stock-based compensation to executives — which companies routinely exclude from the adjusted-earnings figures that drive the “beat” or “miss” headlines. The per-share numbers also reflect any reduction to the share count caused by companies’ repurchase of stock.
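As a rough illustration of the dilution effect described above, here is a minimal sketch with made-up numbers (not from the article) showing how a share issuance can lower sales per share even while total sales grow:

```python
# Hypothetical example: total sales grow 10%, but a share issuance
# (e.g. to fund an acquisition) grows the share count 25%,
# so sales per share actually falls.
sales_before, shares_before = 1_000.0, 100.0   # $1,000 sales, 100 shares
sales_after, shares_after = 1_100.0, 125.0     # +10% sales, +25% shares

sps_before = sales_before / shares_before      # 10.00
sps_after = sales_after / shares_after         # 8.80

assert sps_after < sps_before  # dilution outweighed the sales growth
print(f"SPS before: {sps_before:.2f}, after: {sps_after:.2f}")
```

This is why the article uses sales per share rather than total sales: growth that merely keeps pace with new shares issued is no gain to existing shareholders.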

More on creative accounting and possible remedies:

• Here’s how investors are duped each earnings season

• The SEC is cracking down on made-up earnings numbers. We crunched the numbers — it hasn’t helped

• Take-Two is one of five companies to say new accounting rules will have a material impact

• Netflix needs to address new accounting standards if it continues licensing content

• Target revises reporting after SEC calls out non-GAAP gross margin

• Amazon says new accounting rule will change when it recognizes sales of its devices

A company’s gross margin is its sales, less the cost of goods or services sold, divided by sales. It is a measure of the profitability of a company’s core business and, in the list below, is calculated by FactSet using GAAP numbers, not companies’ “adjusted” numbers.

So if a company increases its sales per share significantly, while its gross margin also increases, it’s a good sign that it didn’t need to offer huge discounts to juice sales. This provides a basis for further research as you consider which companies to invest in.
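The screen described above combines two criteria: gross margin, computed from GAAP numbers as (sales − cost of goods sold) ÷ sales, and growth in sales per share. A minimal sketch of that filter, with hypothetical tickers and figures invented for illustration:

```python
def gross_margin(sales: float, cogs: float) -> float:
    """Gross margin = (sales - cost of goods sold) / sales."""
    return (sales - cogs) / sales

# Hypothetical per-share data for two companies (not from the article).
companies = [
    {"ticker": "AAA", "sps": 5.0, "sps_prior": 4.0,
     "margin": 0.45, "margin_prior": 0.40},  # sales up, margin up -> passes
    {"ticker": "BBB", "sps": 6.0, "sps_prior": 5.0,
     "margin": 0.30, "margin_prior": 0.35},  # sales up via discounts -> fails
]

# Keep companies that grew sales per share while also improving gross margin.
screened = [c["ticker"] for c in companies
            if c["sps"] > c["sps_prior"] and c["margin"] > c["margin_prior"]]
print(screened)  # ['AAA']
```

A company like "BBB" here grew sales but gave up margin to do it, which is exactly the pattern the screen is designed to exclude.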

Here are the 15 S&P 500 companies that increased their sales per share the most, for the most recently reported quarters through Aug. 16, while also improving their gross margins:

Company | Ticker | Sales per share (latest qtr) | Sales per share (year earlier) | Increase in sales per share | Gross margin (latest qtr) | Gross margin (year earlier)
Molson Coors Brewing Co. Class B | TAP | $14.29 | $4.57 | 212% | 40.75% | 40.59%
EQT Corp. | EQT | $3.70 | $2.08 | 78% | 30.92% | -8.21%
Micron Technology Inc. | MU | $4.73 | $2.80 | 69% | 46.87% | 17.18%
Cabot Oil & Gas Corp. | COG | $0.96 | $0.59 | 62% | 32.94% | -4.86%
Nvidia Corp. | NVDA | $3.52 | $2.26 | 56% | 58.39% | 57.84%
Ameriprise Financial Inc. | AMP | $19.11 | $12.45 | 54% | 49.27% | 46.20%
Cimarex Energy Co. | XEC | $4.89 | $3.21 | 52% | 47.71% | 21.45%
Applied Materials Inc. | AMAT | $3.26 | $2.19 | 49% | 44.78% | 41.14%
Range Resources Corp. | RRC | $2.29 | $1.58 | 45% | 19.73% | -26.49%
Lam Research Corp. | LRCX | $12.58 | $8.70 | 45% | 45.59% | 45.19%
Facebook Inc. Class A | FB | $3.16 | $2.22 | 43% | 86.73% | 85.75%
Anadarko Petroleum Corp. | APC | $5.46 | $3.90 | 40% | 23.46% | 12.27%
EOG Resources Inc. | EOG | $4.52 | $3.35 | 35% | 17.03% | 1.94%
Netflix Inc. | NFLX | $6.24 | $4.80 | 30% | 31.71% | 30.03%
Halliburton Co. | HAL | $5.69 | $4.46 | 28% | 9.74% | 2.69%
Source: FactSet

The tremendous increase in sales per share for Molson Coors Brewing Co. reflects its October purchase of the 58% stake in MillerCoors that had been held by SABMiller PLC.

You can click on the tickers for more information, including news coverage, valuation ratios, estimates, charts, filings and financial reports.

FactSet doesn’t calculate gross margins for about 10% of S&P 500 companies (mostly banks and insurance companies), because other measures of profitability are used in certain industries. So in order to represent them, we have listed the 10 companies for which gross margins are not available, that have had the highest returns on common equity during the most recent quarter:

Company | Ticker | Return on common equity (latest qtr) | Return on common equity (year earlier) | Total return (3 years) | Total return (5 years)
Mastercard Inc. | MA | 74.86% | 62.23% | 80% | 219%
Interpublic Group of Companies | IPG | 28.27% | 25.00% | 14% | 113%
Discover Financial Services | DFS | 20.97% | 20.90% | 8% | 78%
UnitedHealth Group Inc. | UNH | 20.94% | 17.81% | 150% | 294%
Aon PLC | AON | 18.35% | 24.28% | 71% | 178%
Humana Inc. | HUM | 16.62% | 9.35% | 110% | 281%
Progressive Corp. | PGR | 16.33% | 13.92% | 113% | 199%
Cigna Corp. | CI | 16.12% | 16.25% | 94% | 305%
Affiliated Managers Group Inc. | AMG | 15.59% | 16.77% | -10% | 50%
Everest Re Group Ltd. | RE | 14.40% | 9.74% | 75% | 182%
Source: FactSet

Seven of these companies with high returns on common equity have beaten the S&P 500’s 35% three-year return, while eight have beaten the index’s 94% five-year return.

13 Expert Tips to Help You Build Your Instagram Following

Instagram has undergone dramatic changes in the way it operates recently. On the positive side, you can now post pictures that aren’t square, you can like comments, Instagram has added Stories and improved its editing tools and filters, and you can also go live. On the negative side, Instagram has disabled many third-party utilities that helped people manage accounts. Worst of all, it has changed the way its algorithm works, and for many, myself included, engagement (likes and comments) has been diminishing.

I was an early adopter of social media and have used social media to build a powerful personal brand. As a result, I am not taking the decrease in my Instagram engagement passively. So, I sought some tips and tricks from other Instagram experts so I could get back on the right track. Here are 13 tips from six successful Instagrammers that will help you build a powerful and engaged Instagram:

Pablo Arias

Arias is a former Nickelodeon star who appeared on the TV show ‘Marvin Marvin.’ He has worked with P Diddy, Tai Lopez and Fashion Nova. His public Instagram page, couplesofsociety, has over 3 million followers.

1. Creative content

Being creative and original is the best way to gain a potential follower’s attention. Initially appealing to a specific demographic or area of interest will allow you to get your foot in the door and begin the process of trial and error. Your content’s creativity will always be the deciding variable in your ability to get a post to go viral. Content will always be king. 

2. Cross-promoting your brand

Find accounts that do similar things to what you are trying to do with your brand and help each other. For example, if you are building a fashion blog, find other bloggers in that space with a similar following and promote one another. If you both have 2,000 followers, the goal would be to both attain 5,000 followers. Then, you can make another partner — this time, one with 5,000 followers, just like you. And so on.

3. Show you care

The small act of following someone back or responding to a post can go a long way. Engaging with your followers — and making an authentic effort to show that you genuinely care about them — matters. It shows you are indeed a real person with similar interests as your audience, and it gives your followers the chance to truly connect with you.

4. Hashtags

Posts with hashtags perform a little better on average than ones without. As simple as it may sound, hashtags do make a difference — just make sure to avoid the banned ones. 

Related: 5 Social Media Rules Every Entrepreneur Should Know

Allison Mayer

Mayer is a humanitarian photojournalist. She has 13,000 Instagram followers, and her engagement rate is particularly impressive.

5. Call Ghostbusters

Locate the people who follow you, but don’t interact with you — also known as your ghost followers. Some of these may be spam or inactive accounts, but a lot of them are real people who just don’t see your content anymore because of the algorithm. Interaction with these accounts will remind them why they followed you in the first place and bring you back into their feed.

It’s a lot easier to convert engagement from existing followers than it is to get new followers or chase engagement with hashtags. A higher engagement rate from those who follow you will increase your credibility within the algorithm and your likeliness to hit the explore page, leading to more followers.

Irina Smirnova

A New York-based photographer working with entrepreneurs on their branding portraiture, Smirnova has 5,000 Instagram followers and runs a nationwide Instagram POD.

6. One theme only

Pick a theme for your account, focus on it, and don’t just jump all over the place. If your Instagram feed has food pics, memes, dogs and images of a sunrise over Detroit, people won’t know what you stand for and won’t be as attracted to you.

7. Educate your audience

Don’t be afraid to tell your followers and the people you follow exactly how they can support you better. Let them know how meaningful their comments are. Remember: There’s (usually) a real person behind every Instagram post.

Related: 10 Laws of Social Media Marketing

Arias, Mayer and Smirnova

8. Join a POD

All three experts recommended joining an engagement pod, which is simply a group of 10 fellow Instagrammers who agree to like and comment (with five words or more) on each other’s posts within the first 30 minutes. This, according to most sources, accelerates the Instagram algorithm and boosts your post to more people. I’m in a pod, and we share our Instagram links via WhatsApp, but most pods use Instagram messenger to communicate.

Paul Mango

Mango is a banking executive who is passionate about photography, his family and the outdoors. He has 4,600 Instagram followers.

9. Edit your photos

To post your best work, edit your photos with a more sophisticated editor than what is in Instagram itself. Snapseed by Google and Photo Editor by Aviary are excellent free editing apps.

Rick Gerrity

Rick is a New York- and New Jersey-based professional photographer. He is known for chasing pictures everywhere in his Nissan Xterra, which has 400,000 miles on the odometer.

10. Be original

I like to see what everyone else is doing and try a different angle or perspective. It’s why I enjoy Instagramming close-up portraits of people in interesting places. It is a one-of-a-kind moment. And don’t forget, black and white is sometimes better than color.

Related: 12 Social Media Mistakes That Entrepreneurs Make

Me

11. Quality photos

Maybe it’s because photography is in my DNA, but it irks me when people post poor-quality images. How can you expect someone to like something on social media that they don’t actually like? Poor-quality pictures are the surest way to get low interaction. Take your time, use a real camera if you can (it’s no longer taboo) and post attractive images.

12. Don’t be like Google

This builds on Arias’s point about being creative and Gerrity’s point about originality. If you can find the picture you are posting on Google, you are not going to get a lot of engagement. If you take a picture of a common place or thing, make sure it’s amazing.

13. Work at it

Social media is not something you do for a little while. It’s a lifestyle. That means posting at least five times per week and liking and commenting on other posts every day.

The path to Instagram success may travel through an underground world, but you now have a map to navigate it.