Google has created a bit of a flutter by saying that its latest AI chips are as much as 1.7 times as fast as Nvidia’s, and the claim has found a prominent backer.
The CEO of Stability AI, which has created products such as Stable Diffusion, has said that Google’s TPUv4s are “really good” and in some ways “superior” to Nvidia’s. “Google TPUv4s are really good & what’s particularly interesting is the interconnect fabric is superior to NVIDIA’s. As the v5s start hitting it will be really interesting, v4s came out internally in 2020! We will ramp our usage,” he tweeted.
In a scientific paper released this week, researchers at Google claimed that a supercomputer powered by its latest tensor processing unit (TPU) — a processor specifically designed to train AI models — outperformed a system powered by comparable Nvidia chips. Google said that in its tests, its 4th-generation TPU, which the company uses to train its own AI systems, was as much as 1.7 times faster and 1.9 times more power-efficient than Nvidia’s comparable A100 Tensor Core GPU.
A TPU (Tensor Processing Unit) is an AI-accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, used with Google’s own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third-party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.
The development shows how deeply Google is embedded in the entire deep learning ecosystem. OpenAI is currently getting all the attention, but Google not only has its own LLM in Bard, it also developed TensorFlow, a framework widely used for machine learning, and is creating cutting-edge hardware for these LLMs to run on. And if Google’s TPUs can actually prove more popular than Nvidia’s GPUs, Google could gain a significant leg up in the battle for AI supremacy that’s almost certain to play out between the big tech companies in the coming years.