Google’s World-Dominating Supercomputer
Everyone is talking about Google Compute Engine, announced at Google I/O in June. Many are focusing on the cost benefits, the threat it poses to Amazon EC2 and Microsoft’s Azure, or complaining that it only offers Linux VMs. But let’s stop and think for a minute about the sheer power offered by Google Compute Engine.
Experts estimate that during the Google Compute Engine demo, when 600,000 cores tackled a genome research problem, we briefly saw the equivalent of the world’s third largest supercomputer. It lasted only for the moment the cores were linked together by Google’s Exacycle technology. As impressive as that was, Google is really just getting started.
No one knows exactly how many physical servers Google has, but there are people going to great lengths to find out. This post gives a great description of the calculations behind a solid estimate of Google’s computing resources, and the numbers we’re using come directly from those calculations. Now, back to the awesomeness of Google’s computing power...
Google currently has an estimated 1.7 million physical servers. By early 2013, when several new datacenters are completed, that should grow to about 2.3 million.
Those are huge numbers, hard to comprehend even for the geekiest geek. But it gets even more astounding when you work out how much computing power that really represents. For comparison, let’s look at the world’s most powerful supercomputer according to the Top500 list: IBM’s Sequoia. It has 1,572,864 cores and runs at 16.32 petaflops.
If Google were to link ALL its computing resources together (as it did with the 600,000 cores at the demo), it would have about 13.6 million cores, roughly nine times the core count of today’s largest supercomputer.
Next year that figure would approach 19.2 million cores, about twelve times the size of IBM’s Sequoia.
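For readers who want to check the math, here is a minimal back-of-envelope sketch in Python. The Google core counts are the rough estimates quoted above (roughly eight cores per physical server), not anything Google has published.

```python
# Back-of-envelope comparison using the estimates quoted in this post.
# These are rough, unofficial numbers (roughly 8 cores per physical server).
sequoia_cores = 1_572_864        # IBM Sequoia, the current Top500 #1

google_cores_2012 = 13_600_000   # ~1.7 million servers today
google_cores_2013 = 19_200_000   # estimate once the new datacenters open

print(f"2012: {google_cores_2012 / sequoia_cores:.1f}x Sequoia's core count")
print(f"2013: {google_cores_2013 / sequoia_cores:.1f}x Sequoia's core count")
# Prints roughly 8.6x today and 12.2x next year.
```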
Google’s Compute Engine could dominate the world of supercomputers in terms of cores.
Processing power in terms of petaflops is harder to estimate with so many unknowns. The demo is estimated to have run at about four petaflops. If that number is correct, and the performance scaled linearly, then 13.6 million cores would process at roughly 90 petaflops (four petaflops scaled up by the ratio of 13.6 million to 600,000 cores), well beyond any supercomputer today.
Realistically, the performance won’t scale linearly. There will be losses as more cores are added, due to management overhead and the physical distribution of the machines across datacenters. But it’s still likely that, even in terms of petaflops, Google’s entire Compute Engine could blow away any supercomputer in existence today.
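Here is the same kind of rough sketch for the petaflop estimate. The 600,000-core, four-petaflop demo figures come from the estimates above; the 50% parallel-efficiency factor is purely an assumption to illustrate the losses just described, not a measured number.

```python
# Rough petaflop estimate: scale the demo's performance up to all of
# Google's estimated cores, then discount for imperfect scaling.
demo_cores = 600_000
demo_petaflops = 4.0              # estimated performance of the I/O demo

total_cores = 13_600_000          # estimated cores across all of Google
sequoia_petaflops = 16.32         # IBM Sequoia's benchmark result

# Optimistic case: perfectly linear scaling.
linear = demo_petaflops * total_cores / demo_cores   # ~90 petaflops

# Assumed parallel efficiency to account for management overhead and
# geographic distribution (an illustrative guess, not a measurement).
efficiency = 0.5
realistic = linear * efficiency                      # ~45 petaflops

print(f"Linear scaling: {linear:.0f} petaflops")
print(f"At 50% efficiency: {realistic:.0f} petaflops "
      f"(vs. Sequoia's {sequoia_petaflops})")
```

Even with that aggressive discount, the estimate stays comfortably ahead of Sequoia’s 16.32 petaflops, which is the whole point of the comparison.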
As exciting as that concept is for computer geeks, it is even more impressive for two reasons:
- It’s privately owned. Most supercomputers are at government facilities. Using them comes with restrictions and caveats galore.
- Anyone can tap Google’s supercomputing power as long as they can pay the bill.
These two facts open up a world of possibilities to researchers who need that kind of power.
Why should average people even care about that?
Supercomputers have already played a huge part in how doctors diagnose and treat cancers, thanks to gene sequencing. Weather prediction is notoriously difficult because of all the variables involved and the amount of data required, and better forecasts for large, damaging storms like hurricanes could save many lives.
And scientists using molecular dynamics (the study of how atoms and molecules interact) might uncover secrets of the universe that have so far eluded them. What they learn could fundamentally change our lives in ways we can’t hope to predict.
Are you excited about the possibilities? Or couldn’t you care less because you don’t need a supercomputer?