Moore’s Law Is Dead. Long Live Huang’s Law.


In Part 2 of my great robot trilogy,¹ I talked a bit about the steady demise of Moore’s Law, which ever since 1965 has predicted that the number of transistors on a chip—and with it, computing power—doubles every 18 months to two years. Obviously Moore’s Law was good for artificial intelligence, which requires vast computing power, and its death is just as obviously bad news. However, there are many ways to skin a cat, and raw, general computing power is only one of them. In the Wall Street Journal this weekend, Christopher Mims introduces us to a replacement for Moore’s Law:

I call it Huang’s Law, after Nvidia Corp. chief executive and co-founder Jensen Huang….Between November 2012 and this May, performance of Nvidia’s chips increased 317 times for an important class of AI calculations, says Bill Dally, chief scientist and senior vice president of research at Nvidia. On average, in other words, the performance of these chips more than doubled every year, a rate of progress that makes Moore’s Law pale in comparison.
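As a sanity check on those numbers: a 317-fold increase between November 2012 and May 2018 (assuming the “this May” in the quote refers to 2018, when the WSJ piece ran—about 66 months) works out to a doubling time of well under a year, which is what makes Moore’s Law “pale in comparison”:

```python
import math

# Implied doubling time from the WSJ figure: a 317x performance
# increase over roughly 66 months (Nov 2012 to May 2018; the end
# year is an assumption based on the article's timing).
factor = 317
months = 66

doublings = math.log2(factor)        # how many times performance doubled
doubling_time = months / doublings   # months per doubling

print(f"{doublings:.1f} doublings, one every {doubling_time:.1f} months")
```

That comes out to roughly eight months per doubling, comfortably faster than even the most aggressive 18-month reading of Moore’s Law.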

Nvidia’s specialty has long been graphics processing units, or GPUs, which operate efficiently when there are many independent tasks to be done simultaneously. Central processing units, or CPUs, like the kind that Intel specializes in, are by contrast much less efficient at that sort of parallel work but better at executing a single, serial task very quickly. You can’t chop up every computing process so that it can be efficiently handled by a GPU, but for the ones you can—including many AI applications—you can perform them many times as fast while expending the same power.
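The “chop it up” distinction can be sketched in a few lines. In this toy example (the function and data are made up for illustration), the first computation applies the same operation to every element independently—each result depends only on its own input, so the work could be split across thousands of GPU lanes. The second has a running dependency, where each step needs the previous result, so extra parallel hardware can’t help:

```python
def relu(x):
    # A common neural-network activation: purely elementwise.
    return x if x > 0 else 0.0

inputs = [-2.0, -0.5, 0.0, 1.5, 3.0]

# GPU-friendly pattern: every element is computed independently,
# so the loop can be distributed across many parallel units.
parallel_friendly = [relu(x) for x in inputs]

# Serial pattern: each step depends on the previous accumulator
# value, forming a dependency chain a GPU can't chop up.
acc = 0.0
serial_only = []
for x in inputs:
    acc = acc + relu(x)   # needs the previous acc before it can run
    serial_only.append(acc)

print(parallel_friendly)  # [0.0, 0.0, 0.0, 1.5, 3.0]
```

Neural-network workloads are dominated by the first pattern—the same arithmetic applied to millions of independent values—which is why repurposed graphics chips suit them so well.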

Experts agree that the phenomenon I’ve labeled Huang’s Law is advancing at a blistering pace. However, its exact cadence can be difficult to nail down. The nonprofit OpenAI says that, based on a classic AI image-recognition test, performance doubles roughly every year and a half. But it’s been a challenge even agreeing on the definition of “performance.” A consortium of researchers from Google, Baidu, Harvard, Stanford and practically every other major tech company is collaborating on an effort to better and more objectively measure it.

Another caveat for Huang’s Law is that it describes processing power that can’t be thrown at every application. Even in a stereotypically AI-centric task like autonomous driving, most of the code the system is running requires the CPU, says TuSimple’s Mr. Hou. Dr. Dally of Nvidia acknowledges this problem, and says that when engineers radically speed up one part of a calculation, whatever remains that can’t be sped up naturally becomes the bottleneck.
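The bottleneck Dally describes is the classic Amdahl’s law limit: if only a fraction p of a workload can be accelerated, the overall speedup is capped at 1/(1−p) no matter how fast the accelerated part gets. A quick sketch, using a hypothetical split (the 70% GPU fraction is an illustrative assumption, not a figure from the article):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work runs s times faster."""
    return 1.0 / ((1.0 - p) + p / s)

# Hypothetical workload: 70% of the runtime is GPU-friendly,
# 30% is stuck on the CPU.
p = 0.70
for s in (10, 100, 1000):
    print(f"GPU part {s:4d}x faster -> overall {amdahl_speedup(p, s):.2f}x")

# Even with an infinitely fast GPU, the serial 30% caps the total:
print(f"ceiling: {1.0 / (1.0 - p):.2f}x")
```

With those numbers, making the GPU portion 1,000 times faster instead of 10 times faster barely moves the overall result, because the untouched CPU code comes to dominate the runtime—exactly the dynamic Dally concedes.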

This is how we’re going to get the processing power needed to develop true AI. General-purpose CPUs may be close to their ultimate limits, but AI is so important that it’s worth investing lots of money in specialized hardware (and software) for highly specific neural tasks. At the moment, we’re repurposing graphics chips, which turn out (a) to be well suited for many AI applications, and (b) to still have a lot of headroom to deliver better and better performance. In time, we’ll develop chips that are even more specialized, something that Google is already working on.

So don’t cry for the death of Moore’s Law. It was great while it lasted, but its end is far from the end of high-power computing. AI is coming, and when it does, its hardware foundations will most likely look nothing like today’s generic computers. Huang’s Law now rules our world.

¹Part 1 is here. Part 3 is still a couple of years away.
