C.J. Muse Gets Nvidia

Rethink Technology business briefs for September 15, 2017.

Evercore ISI analyst raises Nvidia price target to $250

Evercore ISI analyst C.J. Muse raised his price target on Nvidia (NASDAQ:NVDA) to $250, and Nvidia shares closed up more than 6% on the day. Normally, I regard such upgrades (or downgrades), and the market reaction that follows, as a form of noise.

That's not a judgment on the analysts; it simply reflects how random such market moves tend to be. In Muse's case, I'm not sure I agree with his price target, even though I'm long Nvidia, but I do like what he had to say about the company. As quoted by Fortune:

Supported by first mover advantage, its unified graphics processing unit architecture and a system level approach (including an extensive CUDA software ecosystem supported by $10 billion plus in historical research and development dollars), Nvidia has created an industry standard for AI systems that will be nearly impossible to replicate.

This echoes my own opinion that competing AI/GPU solutions that rely on open source software aren’t really competitive, given the large software development effort that Nvidia has made.

Tiernan Ray over at Barron’s quotes Muse more extensively. Here’s a snippet I think is particularly germane to my investment thesis for Nvidia:

Since its introduction in 2006, CUDA has been widely deployed through thousands of applications and published research papers, and supported by an installed base of hundreds of millions of CUDA-enabled GPUs. Nvidia has created an ecosystem over the past 10+ years and a business model that leverages Nvidia’s software and algorithm expertise above and beyond its leading-edge silicon designs.
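
For readers who have never looked at GPU code, here's a minimal sketch of what a CUDA program looks like — the basic building block those thousands of applications and papers are built from. It's purely my own illustration (the kernel and variable names are mine, not anything from Nvidia's materials or Muse's note):

```cuda
// A minimal CUDA program (SAXPY): the GPU runs the kernel across a million
// elements in parallel, while ordinary host code allocates memory and launches it.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory keeps the sketch short; production code often copies explicitly.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // 4,096 blocks of 256 threads
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);  // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Trivial as it is, everything that makes this work — the compiler, the runtime, the profilers, and libraries such as cuBLAS and cuDNN layered on top — is the ecosystem a decade of CUDA investment has produced, and what a would-be competitor has to replicate.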

A fundamental theme of mine has been the paradigm shift away from commodity processor manufacturers like Intel (NASDAQ:INTC) and AMD (NASDAQ:AMD), which Nvidia had been, to a new paradigm in which companies such as Apple (NASDAQ:AAPL) and Nvidia offer integrated hardware/software solutions. Nvidia’s software investment has provided a key discriminator in the datacenter and among AI and supercomputer users that has been largely ignored by analysts.

Ambitious price targets are nice, but I’m more gratified to see this kind of deeper conceptual understanding of Nvidia on the part of financial analysts. Muse definitely gets Nvidia.

Nvidia is part of the Rethink Technology Portfolio and is a recommended buy.

Nintendo Switch Outsells Xbox One, PlayStation 4 in August

According to sales estimates by NPD Group as reported by Fortune, the Nintendo (OTCPK:NTDOY) (OTCPK:NTDOF) Switch was the top-selling console in August, beating Microsoft’s (NASDAQ:MSFT) Xbox One and Sony’s (NYSE:SNE) PlayStation 4. NPD also says that the Switch has been the top-selling console in four of the six months since its release in March.

The success of the Switch has been a boon for Nvidia, which has found a high-volume customer for its Tegra X1 system-on-chip. For Nintendo's second quarter, the company reported Switch unit sales of 1.97 million, which I estimate generated about $50 million in revenue for Nvidia.

What I find interesting about the Switch is not so much the near-term revenue as the long-term potential. The Tegra X1 is by no means Nvidia’s most advanced SOC. It uses a Maxwell-generation GPU section (the generation before Pascal) combined with relatively low-performance ARM Cortex-A57 CPU cores, and it's fabricated on TSMC's (NYSE:TSM) non-FinFET 20 nm process.

Since then, Nvidia has created Parker, which features a 256-core Pascal GPU section and two custom-designed 64-bit “Denver” ARM CPU cores, and is fabricated on TSMC’s 16 nm FinFET process. And soon Nvidia will unleash its Xavier SOC, which features Volta GPU cores, including Tensor Cores for AI acceleration.

I fully expect Nintendo to create upgraded, more capable versions of the Switch as time goes by, using the Parker and Xavier SOCs as they become more affordable. Think what a Xavier-enabled Switch would be like: users would be able to play true, AI-enabled games.

Why the App Economy beat the Silicon Economy

The pronouncements of C.J. Muse may fly in the face of the conventional wisdom that has Nvidia succumbing to an onslaught of ASIC machine learning accelerator chips. I frankly think the ASIC threat is overrated, although my arguments are largely based on history and analogy.

Yes, one can build an ASIC that is more energy efficient at performing a given AI-related task, such as inference processing. Some assume that's game over for Nvidia. Those people should ask themselves why ASICs aren't used more widely.

ASICs were really the first integrated circuits, created before the microprocessor was invented. Yet microprocessors quickly took over general-purpose computing, despite the fact that, even then, an ASIC would have been faster and more energy efficient.

Let’s imagine an alternative universe in which microprocessors didn’t become dominant. Instead, special purpose ASICs were developed for each computing task one might want to perform, such as running a spreadsheet or sending email.

One can imagine a sort of “Silicon Economy” in place of today’s App Economy, in which people bought little slivers of silicon the way we buy apps. Instead of computers, there would be some form of universal host device into which the silicon “apps” would be inserted.

These little pieces of silicon would perform their functions faster and more efficiently than any general-purpose microprocessor fabricated with equivalent processes. It's all perfectly feasible, but it never happened. And the reason it never happened is that the Silicon Economy would be grossly inefficient compared to the microprocessor-hosted App Economy.

Programmable, general-purpose processors, whether CPUs or GPUs, have profound economic advantages over ASICs. Their main advantage is economies of scale. With the benefit of software, the same processor, produced by the millions, can perform myriad different functions. Likewise, software, once written and tested, can be deployed to millions of processors.
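
To make that concrete, here's a toy sketch (entirely my own, and deliberately simplistic) of the same GPU running two unrelated workloads back to back — a crude smoothing filter and a compound-interest calculation — with nothing changing but the software:

```cuda
// Toy illustration of the economics argument: the same GPU silicon runs two
// entirely unrelated workloads; only the code differs.
#include <cuda_runtime.h>
#include <cstdio>

// Workload 1: a crude 3-point smoothing filter, the kind of thing an imaging app might do.
__global__ void smooth(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;
}

// Workload 2: compound interest over many accounts, the kind of thing a finance app might do.
__global__ void grow(float *balance, float rate, int years, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        for (int y = 0; y < years; ++y) balance[i] *= (1.0f + rate);
}

int main() {
    const int n = 1 << 16;
    float *signal, *result, *accounts;
    cudaMallocManaged(&signal, n * sizeof(float));
    cudaMallocManaged(&result, n * sizeof(float));
    cudaMallocManaged(&accounts, n * sizeof(float));
    for (int i = 0; i < n; ++i) { signal[i] = float(i % 100); accounts[i] = 1000.0f; }

    // Same processor, two different "applications" -- only the software changes.
    smooth<<<(n + 255) / 256, 256>>>(signal, result, n);
    grow<<<(n + 255) / 256, 256>>>(accounts, 0.05f, 10, n);
    cudaDeviceSynchronize();

    printf("smoothed[1] = %.2f, balance[0] = %.2f\n", result[1], accounts[0]);
    cudaFree(signal); cudaFree(result); cudaFree(accounts);
    return 0;
}
```

In the Silicon Economy, each of those workloads would have required its own chip; here they share one, and the marginal cost of the second "app" is just the software.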

People who focus on the superiority of ASICs seem to forget that we’re still in the early stages of the AI revolution. We’re still developing new algorithms and software. Probably, we always will. Researchers and software developers still need programmable platforms.

Certain tasks, such as tensor processing, have emerged that benefit from hardware acceleration. The development of ASICs to accelerate these functions does not herald the end of programmable platforms. As has often happened in the past, when a mathematical or other function could benefit from hardware acceleration, that hardware was incorporated into the processor. Thus, today, we have hardware acceleration built into modern microprocessors for many tasks, including camera and video processing, memory access and video output, and vector extensions.

The modern SOC is an amalgam of all manner of different processors and controllers. Nvidia's incorporation of Tensor Cores into its latest Volta GPUs is just the latest example of special-purpose hardware being co-opted into a programmable processor. Likewise, those same Volta GPU cores, Tensor Cores included, have been incorporated into one of the most potent ARM SOCs soon to reach the market, Nvidia's Xavier.
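
For the technically inclined, here is roughly what that co-opting looks like from the programmer's side. The sketch below uses the wmma API that CUDA 9 exposes for Volta's Tensor Cores; it's my own minimal illustration (one warp, one 16x16 tile), not production code, but it shows special-purpose matrix hardware being driven from an ordinary, programmable CUDA kernel:

```cuda
// Volta's Tensor Cores driven from ordinary CUDA code via the wmma API
// (requires compute capability 7.0+, i.e. nvcc -arch=sm_70). One warp
// multiplies a single 16x16x16 half-precision tile, accumulating in float.
#include <mma.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>
using namespace nvcuda;

__global__ void tensor_core_tile(const half *A, const half *B, float *C) {
    // Fragments map matrix tiles onto the registers the Tensor Cores consume.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);
    wmma::load_matrix_sync(a_frag, A, 16);           // leading dimension = 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // executes on the Tensor Cores
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}

int main() {
    half *A, *B;
    float *C;
    cudaMallocManaged(&A, 16 * 16 * sizeof(half));
    cudaMallocManaged(&B, 16 * 16 * sizeof(half));
    cudaMallocManaged(&C, 16 * 16 * sizeof(float));
    for (int i = 0; i < 16 * 16; ++i) { A[i] = __float2half(1.0f); B[i] = __float2half(1.0f); }

    tensor_core_tile<<<1, 32>>>(A, B, C);  // a single warp drives the Tensor Cores
    cudaDeviceSynchronize();

    printf("C[0] = %.1f\n", C[0]);         // expect 16.0 for all-ones inputs
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

That, in miniature, is the pattern: the fixed-function unit accelerates the hot spot, while the surrounding programmability is what lets researchers keep changing the algorithms.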

Disclosure: I am/we are long NVDA, TSM, AAPL.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Editor’s Note: This article discusses one or more securities that do not trade on a major U.S. exchange. Please be aware of the risks associated with these stocks.
