Most recent AI developments have centered on systems that do what humans already do, only faster and more cheaply. But AI is now beginning to do things that humans themselves would find hard to do.
Google DeepMind says its AI has discovered a faster sorting algorithm. “We introduce AlphaDev, an artificial intelligence (AI) system that uses reinforcement learning to discover enhanced computer science algorithms – surpassing those honed by scientists and engineers over decades. AlphaDev uncovered a faster algorithm for sorting, a method for ordering data,” DeepMind said in a blogpost. DeepMind says its sort implementations led to improvements of up to 70% for sequences of length five and roughly 1.7% for sequences of more than 250,000 elements.
Google DeepMind achieved this by formulating the sorting problem as a single-player game for an AI to solve. The AI worked at the level of assembly instructions, searching for more efficient sequences, and the researchers found that it was able to produce assembly code better than the existing implementations.
“We formulate the problem of discovering new, efficient sorting algorithms as a single-player game that we refer to as AssemblyGame,” DeepMind’s paper says. “In this game, the player selects a series of low-level CPU instructions, which we refer to as assembly instructions, to combine to yield a new and efficient sorting algorithm. To play the game, we introduce AlphaDev, a learning agent that is trained to search for correct and efficient algorithms,” the paper added. “Using AlphaDev, we have discovered fixed and variable sort algorithms from scratch that are both new and more efficient than the state-of-the-art human benchmarks,” the paper said.
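To make the game framing concrete, here is a minimal, hypothetical sketch in C++ of the kind of reward signal such a setup could use: a candidate program counts as correct only if it sorts every possible input of the fixed size, and among correct programs, shorter (cheaper) ones score higher. The function names and scoring below are illustrative assumptions, not DeepMind’s actual implementation.

```cpp
// Toy illustration of an AssemblyGame-style reward signal (assumed, not
// DeepMind's code): verify a fixed-size sort candidate exhaustively, then
// prefer shorter programs among the correct ones.
#include <algorithm>
#include <array>
#include <iostream>

// A candidate "program" is modelled here as a plain function; in the real
// system it is a sequence of assembly instructions built up move by move.
using Sort3Fn = void (*)(std::array<int, 3>&);

double reward(Sort3Fn candidate, int instruction_count) {
    std::array<int, 3> perm{0, 1, 2};
    // Exhaustively check every permutation of three distinct elements.
    do {
        std::array<int, 3> input = perm;
        candidate(input);
        if (!std::is_sorted(input.begin(), input.end()))
            return -1000.0;  // incorrect program: large negative reward
    } while (std::next_permutation(perm.begin(), perm.end()));
    // Correct program: shorter (fewer instructions) means higher reward.
    return -static_cast<double>(instruction_count);
}

// A trivially correct candidate, used only for demonstration.
void sort3_reference(std::array<int, 3>& a) { std::sort(a.begin(), a.end()); }

int main() {
    std::cout << reward(sort3_reference, 20) << "\n";  // prints -20
}
```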
Sorting is one of the most fundamental problems in computer science: arranging a list of items, such as numbers, in ascending or descending order. There are several approaches to sorting, each with its own trade-offs in speed, memory use, and other factors. Sorting runs in the background of most programs people interact with daily, and the core algorithms have changed little over time. As such, DeepMind’s announcement that its AI has discovered a faster sorting algorithm sounds like a big deal.
But not everyone is convinced. A viral comment on Hacker News argues that DeepMind has exaggerated the significance of its discovery. “As someone that knows a thing or two about sorting… bullshit,” user orlp said. “No new algorithms were uncovered, and the work here did not lead to the claimed improvements. They found a sequence of assembly that saves… one MOV. That’s it. And it’s not even novel, it’s simply an unrolled insertion sort on three elements. That their patch for libc++ is 70% faster for small inputs is only due to the library not having an efficient implementation with a *branchless* sorting network beforehand. Those are not novel either, they already exist, made by humans,” the comment said. “I’m happy for the researchers that the reinforcement learning approach worked, and that it gave good code. But the paper and surrounding press release is self-aggrandizing in both its results and impact,” the comment added.
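For context, a branchless sorting network for three elements, the kind of well-known, human-written code the commenter is referring to, looks roughly like the following. This is a generic sketch, not libc++’s implementation or AlphaDev’s output.

```cpp
// A standard branchless sorting network for three elements: three fixed
// compare-exchange steps, with no data-dependent branches.
#include <algorithm>
#include <iostream>

// Compare-exchange: after the call, a <= b. With std::min/std::max the
// compiler can typically emit conditional moves instead of branches.
inline void compare_swap(int& a, int& b) {
    int lo = std::min(a, b);
    int hi = std::max(a, b);
    a = lo;
    b = hi;
}

void sort3(int& x, int& y, int& z) {
    compare_swap(x, y);  // x <= y
    compare_swap(x, z);  // x is now the minimum
    compare_swap(y, z);  // y <= z, so the three are sorted
}

int main() {
    int a = 5, b = 1, c = 3;
    sort3(a, b, c);
    std::cout << a << " " << b << " " << c << "\n";  // 1 3 5
}
```

Because the comparison sequence is fixed regardless of the input, the processor never has to predict a branch, which is what makes such networks fast for tiny, fixed-size inputs.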
Time will tell how significant this discovery is, but on the face of it, it seems impressive: a credible organization like Google has published a paper in a credible journal like Nature claiming that an AI searched through low-level assembly code and produced a sorting routine faster than what humans had written. The implications are profound: once AI becomes capable enough, it might iteratively improve its own performance and eventually lead us to AGI. It’s still early days, but results like these lend credence to the idea that AGI probably isn’t an unattainable goal after all.