Google says its AI supercomputer is faster, greener than Nvidia
© Reuters. FILE PHOTO: The Google logo is pictured atop an office building in Irvine, California, U.S. August 7, 2017. REUTERS/Mike Blake
By Stephen Nellis
(Reuters) -Alphabet Inc’s Google (NASDAQ:) on Tuesday released new details about the supercomputers it uses to train its artificial intelligence models, saying the systems are both faster and more power-efficient than comparable systems from Nvidia (NASDAQ:) Corp.
Google has designed its own custom chip called the Tensor Processing Unit, or TPU. It uses those chips for more than 90% of the company’s work on artificial intelligence training, the process of feeding data through models to make them useful at tasks like responding to queries with human-like text or generating images.
The Google TPU is now in its fourth generation. Google on Tuesday published a scientific paper detailing how it has strung more than 4,000 of the chips together into a supercomputer using its own custom-developed optical switches to help connect individual machines.
Improving these connections has become a key point of competition among companies that build AI supercomputers because so-called large language models that power technologies like Google’s Bard or OpenAI’s ChatGPT have exploded in size, meaning they are far too large to store on a single chip.
The models must instead be split across thousands of chips, which must then work together for weeks or more to train the model. Google’s PaLM model – its largest publicly disclosed language model to date – was trained by splitting it across two of the 4,000-chip supercomputers over 50 days.
Google said its supercomputers make it easy to reconfigure connections between chips on the fly, helping it route around failures and tune for performance gains.
“Circuit switching makes it easy to route around failed components,” Google Fellow Norm Jouppi and Google Distinguished Engineer David Patterson wrote in a blog post about the system. “This flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of an ML (machine learning) model.”
While Google is only now releasing details about its supercomputer, it has been online inside the company since 2020 in a data center in Mayes County, Oklahoma. Google said that startup Midjourney used the system to train its model, which generates fresh images after being fed a few words of text.
In the paper, Google said that for comparably sized systems, its chips are up to 1.7 times faster and 1.9 times more power-efficient than a system based on Nvidia’s A100 chip that was on the market at the same time as the fourth-generation TPU.
An Nvidia spokesperson declined to comment.
Google said it did not compare its fourth-generation TPU to Nvidia's current flagship H100 chip because the H100 came to market after Google's chip and is made with newer technology.
Google hinted that it might be working on a new TPU that would compete with the Nvidia H100 but provided no details, with Jouppi telling Reuters that Google has “a healthy pipeline of future chips.”