We’ve painted ourselves into another corner with artificial intelligence. We’re finally starting to break through the utility barrier, but we’re running up against the limits of our ability to responsibly meet the enormous energy demands of our machines.
At the current rate of growth, it looks like we’ll have to turn Earth into Coruscant if we’re to continue spending unfathomable amounts of energy training systems like GPT-3.
The problem: Simply put, AI takes too much time and energy to train. A layperson might picture a bunch of code on a laptop screen when they think about AI development, but the truth is that many of the systems we use today were trained on massive GPU clusters, supercomputers, or both. We’re talking about incredible amounts of power. And, worse, it takes a long time to train AI.
The reason AI is so good at the things it’s good at, like image recognition or natural language processing, is that it basically does the same thing over and over again, making tiny changes each time, until it gets it right. But we’re not talking about running a few simulations. Training a robust artificial intelligence system can take hundreds or even thousands of hours.
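That repeat-and-nudge loop can be made concrete with a toy sketch. This is not the paper’s method or GPT-3’s training code, just a minimal gradient-descent example: the same update runs thousands of times, each one shifting a single weight slightly, which is why training at real scale eats so many compute hours.

```python
# Toy illustration of training-by-repetition (hypothetical example,
# not any production system): gradient descent fits y = w * x by
# making a tiny change to w on every pass until the error shrinks.
def train(xs, ys, lr=0.01, steps=1000):
    w = 0.0  # single weight for the model y ~ w * x
    for _ in range(steps):
        # mean-squared-error gradient for this toy model
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # one tiny adjustment, repeated many times
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # data generated by y = 2x
w = train(xs, ys)          # converges toward w ≈ 2.0
```

Scale that single weight up to billions of parameters and the thousand steps up to millions, and the energy bill stops being hypothetical.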
One expert estimated that GPT-3, a natural language processing system created by OpenAI, would cost around $4.6 million to train. But that assumes a single training run, and very, very few powerful AI systems are trained in one pass. In fact, the total expense required to get GPT-3 to spit out impressively coherent gibberish is likely in the hundreds of millions.
GPT-3 is one of the worst offenders, but there are countless AI systems sucking up grossly disproportionate amounts of energy compared to standard computational models.
The problem? If AI is the future, then under the current energy-hungry paradigm, the future won’t be green. And that could mean we simply won’t have a future.
The solution: Quantum computing.
An international team of researchers, including scientists from the University of Vienna and MIT, recently published research demonstrating a “quantum speed-up” in a hybrid artificial intelligence system.
In other words: they harnessed quantum mechanics to let an AI explore more than one solution at a time. That, of course, speeds up the training process.
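For intuition about why “more than one solution at a time” saves work, here’s a classical simulation of Grover-style amplitude amplification, the textbook quantum-search trick. This is an analogy chosen by us, not the team’s exact experiment: finding one marked answer among N takes roughly N/2 random classical guesses, but only on the order of √N amplified quantum queries.

```python
import math

def grover_queries(n_items):
    """Classically simulate Grover-style amplitude amplification:
    the amplitude of the 'good' answer grows every round, so the
    number of oracle queries needed scales like sqrt(N), not N."""
    good = 1.0 / math.sqrt(n_items)  # amplitude of the marked item
    rest = good                      # amplitude of each other item
    queries = 0
    while good * good < 0.5:         # stop once success is likely
        good = -good                 # oracle: flip the marked item's sign
        # diffusion: reflect every amplitude about the mean amplitude
        mean = (good + (n_items - 1) * rest) / n_items
        good, rest = 2 * mean - good, 2 * mean - rest
        queries += 1
    return queries

# One marked answer among a million: random guessing averages
# ~500,000 tries, while amplification needs on the order of 1,000.
q = grover_queries(1_000_000)
```

The real experiment wires a quantum communication channel into a reinforcement-learning loop rather than running textbook Grover search, but the underlying payoff is the same flavor: fewer interactions to find a good answer.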
According to the team’s article:
The crucial question for practical applications is how fast agents learn. Although various studies have made use of quantum mechanics to speed up the agent’s decision-making process, a reduction in learning time has not yet been demonstrated.
Here we present a reinforcement learning experiment in which the learning process of an agent is sped up by using a quantum communication channel with the environment. We further show that combining this scenario with classical communication enables the evaluation of this improvement and allows optimal control of the learning progress.
How?
This is the coolest part. They ran 10,000 models through 165 experiments to compare how they performed using classical AI alone versus when augmented with special quantum chips.
And by special, we mean: you know how classical processors work by manipulating electricity? The quantum chips the team used were nanophotonic, meaning they use light instead of electricity.
The gist of it is that in circumstances where classical AI struggles with very hard problems (think: supercomputer-level problems), the hybrid quantum system outperformed standard models.
Interestingly, when faced with easier challenges, the researchers saw no improvement in performance. It seems you need to shift into fifth gear before the quantum turbocharger kicks in.
There’s still a long way to go before we can unfurl the “mission accomplished” banner. The team’s work isn’t the end-state solution we’re ultimately after, but rather a small-scale demonstration of how it might work once we figure out how to apply their techniques to bigger, real-world problems.
You can read the entire paper here in Nature.
H/t: Shelly Fan, Singularity Hub
Published March 17, 2021 – 19:41 UTC