By Tiernan Ray
A year and a half ago, we ran a cover story in Barron’s print magazine titled “Watch Out Intel, Here Comes Facebook.” Its premise was that as Moore’s Law ceases to deliver as expected, traditional chip vendors face a risk from new kinds of chip efforts, potentially including chips developed by the large cloud computing providers such as Alphabet’s (GOOGL) Google unit and Facebook.
Turns out, Google already had a chip running in its data centers at the time of our writing, though the company declined to comment for our article. Its work on a chip was about the worst-kept secret in Silicon Valley, even if no one knew all the details of the part.
Flash forward, and today Google is presenting a paper at a technical conference about how its chip, the “Tensor Processing Unit,” or TPU, first announced in May of last year, is 15 to 30 times faster than “contemporary server-class CPUs and GPUs,” by which it means the chips from Intel (INTC) and Nvidia (NVDA).
Rick Merritt with EE Times had a nice summary yesterday.
As the abstract of the paper, co-authored by David Patterson, a professor of computer architecture at U.C. Berkeley, states, there’s a direct link between the “end” of Moore’s Law and custom hardware, as we said in the cover story: “With the ending of Moore’s Law, many computer architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware.”
I interviewed Patterson for the Barron’s cover story. He declined to say anything about the year he spent at Google, work that obviously contributed to this custom chip. But among the things he was willing to tell me was that the old rule of, say, 50% jumps in microprocessor performance with each new process-technology node had slowed to more like 20%, and that going forward, instead of ever-smaller transistors and more CPU “cores,” “It will take domain-specific processors.”
“Do just one thing really well,” Patterson told me. “Different designs on the chip for different things.”
What does this mean? It means that Intel’s efforts in its data center division, where it has dominated server computing, are at risk from continued efforts by Google, Facebook (FB), and others to design processors tuned to their own needs. It also means that however well Nvidia is doing in GPUs for machine learning, it faces some of the same threat. So does Advanced Micro Devices (AMD).
Those interested in digging further may want to look at “RISC-V,” the other project Patterson is involved with: an open instruction-set architecture that makes it easier than ever for a small team to develop a processor tuned to particular tasks.