The progress of Artificial Intelligence over the last few months has stunned the world, and it turns out that its creators might be just as surprised.
Google has said that its Artificial Intelligence is teaching itself things it was never programmed to do. In an interview with CBS's 60 Minutes, a Google executive recounted how the company's AI system Bard learned Bengali without ever having been explicitly taught the language. Skills like this are called emergent properties: capabilities that an AI appears to acquire on its own without being explicitly trained on them.
“We discovered that with a very few amounts of prompting in Bengali, it can now translate all of Bengali,” said James Manyika, Google’s senior vice-president of technology and society. A clip showed Bard responding in both English and Bengali after being asked questions in Bengali such as “What is the capital of West Bengal?” and “What are the favourite pizza toppings in New York City?”. Google hadn’t trained this particular AI program on Bengali, yet it could not only respond in the language but also translate text from English to Bengali. “All of a sudden we now have a research effort in which we’re now trying to get to a thousand languages,” Manyika added.
“There is an aspect of this which we in the field call a ‘black box’,” said Google CEO Sundar Pichai. “You don’t fully understand. And you can’t quite tell why it said this. (But) we have some ideas. Our ability to better understand (this behaviour) gets better over time. But that’s where the state of the art is (right now),” he added.
The interviewer then asked Pichai why Google had released this technology on society without fully understanding how it works. Pichai had a clever response: he quipped that we don't fully understand how the human mind works either.
Understanding Bengali is just one of the emergent properties AI systems have displayed over the last few months. Without ever being trained to do so, AI systems have solved mazes, unscrambled jumbled words, invented whole new board games, identified authors from just a few sentences of text, and even debugged code they themselves wrote. Large Language Models like Bard and ChatGPT weren't taught any of this; they were trained on nothing more than a massive amount of text, and now display capabilities that even their developers hadn't quite anticipated. It's hard to say how advanced these skills might become, but given the pace at which AI is developing, humanity, and AI researchers in particular, may be in for some interesting new surprises in the coming years.