I have a long piece in the current issue of the magazine about how long-term trends in artificial intelligence are likely to cause mass unemployment. Because of that focus, I included only a few brief examples of the current state of AI research and how it works.
And it’s a good thing, since that stuff is obsolete already. For example, I wrote a sentence or two about Google DeepMind’s AlphaGo program and how it became the best Go player in the world years before anyone thought a computer could. But now it’s even better, and this has some lessons for us. Here is Christina Bonnington in Slate:
On Monday, researchers announced that Google’s project AutoML had successfully taught itself to program machine learning software on its own….On Wednesday, in a paper published in the journal Nature, DeepMind researchers revealed another remarkable achievement. The newest version of its Go-playing algorithm, dubbed AlphaGo Zero, was not only better than the original AlphaGo, which defeated the world’s best human player in May. This version had taught itself how to play the game. All on its own, given only the basic rules of the game. (The original, by comparison, learned from a database of 100,000 Go games.) According to Google’s researchers, AlphaGo Zero has achieved superhuman-level performance: It won 100–0 against its champion predecessor, AlphaGo.
….Early AlphaGo versions operated on 48 Google-built TPUs. AlphaGo Zero works on only four. It’s far more efficient and practical than its predecessors. Paired with AutoML’s ability to develop its own machine learning algorithms, this could seriously speed up the pace of DeepMind’s AI-related discoveries.
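For readers who want a concrete sense of what “taught itself how to play, given only the basic rules” means, here’s a deliberately tiny sketch of the self-play idea, using tic-tac-toe and a simple lookup table instead of a deep network and tree search. All of the names and numbers below are illustrative; this is not DeepMind’s code, just the shape of the loop: play games against yourself, then update your evaluations based on who won.

```python
# Toy illustration of learning from self-play alone, given only the rules.
# Tic-tac-toe with a tabular value function stands in for Go with a deep
# network and Monte Carlo tree search. Purely illustrative, not DeepMind code.
import random

def legal_moves(board):
    return [i for i, c in enumerate(board) if c == " "]

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}            # board state -> estimated value for the player who just moved
EPSILON, ALPHA = 0.1, 0.5

def choose_move(board, player):
    moves = legal_moves(board)
    if random.random() < EPSILON:   # occasionally explore a random move
        return random.choice(moves)
    # otherwise pick the move whose resulting position has the best learned value
    return max(moves, key=lambda m: values.get(board[:m] + player + board[m+1:], 0.0))

def self_play_game():
    board, player, history = " " * 9, "X", []
    while True:
        move = choose_move(board, player)
        board = board[:move] + player + board[move+1:]
        history.append((board, player))
        w = winner(board)
        if w or not legal_moves(board):
            return history, w
        player = "O" if player == "X" else "X"

def train(games=20000):
    for _ in range(games):
        history, w = self_play_game()
        for state, player in history:
            # target is +1 if the player who created this position went on to win, -1 if they lost
            target = 0.0 if w is None else (1.0 if player == w else -1.0)
            values[state] = values.get(state, 0.0) + ALPHA * (target - values.get(state, 0.0))

train()
print(f"learned value estimates for {len(values):,} positions")
```

The important point is what’s missing: there’s no database of human games anywhere. The only inputs are the rules themselves and the outcomes of games the program plays against itself.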
These results highlight some of the things I briefly mention in my article. We have a tendency to think of AI primarily in terms of raw hardware power, and there’s no question that this is important. Full AI will simply never be possible until we have cheap, energy-efficient computing platforms with roughly the computing power of the human brain.
But “computing power” is a combination of hardware and software. And hardware is a combination of CPU speed, custom chip design, and massive parallelism. In this case, the new AlphaGo Zero system became a dozen times more efficient not because Intel came out with a faster CPU, but through better software running on Google’s custom Tensor Processing Unit (TPU) chips.
This is why I’m so confident that computing power will continue to double every couple of years, just as it has for the past half century. Standard CPUs aren’t likely to keep doubling in raw speed, but they’ll get smaller, cheaper, and more energy efficient. Combine this with better algorithms, better use of parallelism, and custom AI processing chips (which are in their infancy right now), and effective computing power is likely to continue to grow exponentially.
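Just to make the compounding concrete (the two-year doubling period is my working assumption here, not a measured figure): doubling every couple of years works out to roughly a 30-fold increase per decade and a 1,000-fold increase over twenty years.

```python
# Back-of-the-envelope compounding: "doubles every couple of years" adds up fast.
# The two-year doubling period is an assumption for illustration, not a measurement.
doubling_period_years = 2
for years in (10, 20, 30):
    factor = 2 ** (years / doubling_period_years)
    print(f"after {years} years: roughly {factor:,.0f}x today's effective computing power")
```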
We’re also in the infancy of making use of AI to help build better AI. Right now this is extremely limited, but that’s the way everything starts. In another decade, AI-assisted AI development is likely to be yet another factor keeping development on an exponential curve. This means it won’t be long before AI starts to get good at tasks that we currently pay human beings to do.
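If you’re wondering what “AI helping build AI” looks like in practice today, it’s mostly automated search over model designs: an outer loop proposes candidate configurations, scores each one on held-out data, and keeps the best. Here’s a stripped-down, hypothetical sketch of that loop. Real systems like AutoML search a vastly larger space with a learned controller, so treat this purely as an illustration of the shape of the idea.

```python
# A deliberately tiny sketch of automated model search ("AI helping build AI"):
# propose candidate configurations, evaluate each on held-out data, keep the best.
# Real AutoML systems replace the random proposals with a learned controller.
import random

random.seed(0)

# Synthetic 1-D dataset: label is 1 when x > 0.5, with roughly 10% label noise.
points = [random.random() for _ in range(400)]
data = [(x, int(x > 0.5) if random.random() > 0.1 else int(x <= 0.5)) for x in points]
train, valid = data[:300], data[300:]

def knn_predict(k, x):
    """Hand-rolled k-nearest-neighbours majority vote over the training set."""
    neighbours = sorted(train, key=lambda pt: abs(pt[0] - x))[:k]
    votes = sum(label for _, label in neighbours)
    return int(votes * 2 > k)

def accuracy(k):
    return sum(knn_predict(k, x) == y for x, y in valid) / len(valid)

# The "search" itself: try candidate configurations (here just the value of k)
# and keep whichever scores best on the validation set.
best_k, best_acc = None, 0.0
for _ in range(20):
    k = random.choice([1, 3, 5, 7, 9, 15, 25, 51])
    acc = accuracy(k)
    if acc > best_acc:
        best_k, best_acc = k, acc

print(f"best configuration found: k={best_k}, validation accuracy {best_acc:.2f}")
```

The specific model doesn’t matter. What matters is that the loop doing the choosing is itself a program, and today’s versions of that loop are still very crude.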
All of this is going to start putting a lot of people out of work in a decade or so, and it won’t be a rerun of the Industrial Revolution. Millions of people will be out of work for good, since by definition, any new jobs created by the transition to AI can also be done by AI. This is something we should be thinking hard about. But as I researched my article, I was disappointed to find that even now, when the future of AI seems to be barreling toward us in a way that’s hardly deniable anymore, very little thought is going into this.
So what can we do? I have a few ideas, but mostly I’m hoping that my piece sparks some more serious discussion. You can join in when it appears online.