
Supercomputer stagnation: New list of the world's fastest computers casts shadow over exascale by 2020

By Sebastian Anthony
Our glorious leader, Sebastian Anthony, violating Watson at IBM Research

Yesterday, Top 500 released the updated list of the world's fastest supercomputers, and it revealed a rather worrying trend: Supercomputer performance gains are slowing down, rather than speeding up. Given that most of the world's computing superpowers have announced their intention to create exascale (1,000-petaflop) supercomputers by 2020, this would appear to be a bit of a problem.

The Top 500 list is updated twice a year, in June and November. While the combined performance of all 500 supercomputers is still up from November 2013, the gain is a lot flatter than the long-term trend of the last four years or so. In the graph below, you can see how the latest data point (the blue dot) sits almost entirely below the trend line. You can also see that, unusually for recent years, one supercomputer -- China's Tianhe-2 -- has held the number-one spot for the last 18 months. And finally, the yellow dots -- which plot the performance of the 500th-fastest supercomputer over the last 20 years -- are also starting to dip below the long-term trend.

Top 500 supercomputer performance over the last 20 years
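
If you want to eyeball the trend yourself, here's a minimal Python sketch of the kind of log-linear (i.e. exponential) fit shown in the graph above. This isn't Top 500's own methodology, and the combined-performance figures below are approximate, illustrative values only:

    import math

    # (list date as a fractional year, approximate combined Linpack petaflops)
    # These values are illustrative, not official Top 500 data.
    history = [(2011.5, 59.0), (2012.0, 74.0), (2012.5, 123.0),
               (2013.0, 162.0), (2013.5, 223.0), (2014.0, 250.0)]
    latest = (2014.5, 274.0)  # June 2014 list, approximate

    # Least-squares fit of a line to log(performance) = exponential trend.
    n = len(history)
    xs = [t for t, _ in history]
    ys = [math.log(p) for _, p in history]
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean

    # Extrapolate the trend to the newest list date and compare.
    predicted = math.exp(intercept + slope * latest[0])
    print(f"Trend predicts ~{predicted:.0f} petaflops; the list shows {latest[1]:.0f}")

Run against these figures, the long-term trend predicts a combined total well above what the June 2014 list actually delivers -- which is exactly the gap the blue dot shows.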

If we look at the bigger picture, all of these trends are caused by one primary factor: consolidation of power. Rather than everyone and their mother having a supercomputer, there has been a definite shift towards fewer, more powerful machines. Because our demand for faster supercomputers is increasing more quickly than Moore's law and Dennard scaling can deliver, the only way to keep up is to build disproportionately bigger and better machines. The summed performance of the Top 500 has continued to grow by roughly 15% per year over the last few years, while a PC bought today is probably only ~5% faster than the one you bought last year. It is getting harder and harder to stick to the trend, let alone beat it, which is why we are seeing a larger push towards HPC-specific accelerators (Nvidia Tesla and Intel Xeon Phi) and exotic cooling methods.
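
To see why roughly 15% annual growth is a problem for a 2020 deadline, some back-of-the-envelope compound-growth arithmetic helps. The sketch below starts from Tianhe-2's 33.86 petaflops; the growth rates are assumptions picked for illustration, not official Top 500 figures:

    import math

    current, target = 33.86, 1000.0  # petaflops: today's #1 vs. exascale

    # Assumed annual growth rates: PC-like, recent-Top-500-like, and a
    # near-doubling historical pace (all illustrative).
    for rate in (0.05, 0.15, 0.90):
        years = math.log(target / current) / math.log(1.0 + rate)
        print(f"At {rate:.0%} per year: {years:.1f} years to exascale")

At 15% a year, the leap from 33.86 petaflops to 1 exaflop takes more than two decades; only something close to the historical near-doubling pace gets there by 2020.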

Top 500 supercomputers, broken down by CPU maker

There are some other interesting trends in the June 2014 Top 500 list, too. 62 supercomputers on the list (up from 53) now use accelerators/coprocessors: Xeon Phi gained four new systems, Tesla gained six, and Radeon gained none. On the CPU front, Intel increased its share of the Top 500 from 82% to 85%, with IBM Power holding steady at 8% and AMD Opteron losing ground. HP still leads in the total number of systems in the Top 500 (182), but IBM and Cray dominate the top 10, with four and three installations respectively.

To circle back, though, the big story is that it's getting progressively harder to pump up the petaflops. Thanks to the consolidation discussed earlier, we are still going to see some huge supercomputers from China, Japan, the US, and Europe -- but they will take progressively longer to go from the drawing board, to installation, to peak performance. It's no longer just a matter of whacking thousands of CPUs into hundreds of cabinets and plugging them into a medium-sized power station. Reaching 100 petaflops (in the next couple of years) and then 1,000 petaflops/exascale (2018-2020) is going to require a seriously holistic approach that perfectly marries hardware, software, and interconnect.

Tianhe-2, the world's fastest supercomputer: 32,000 Ivy Bridge Xeon chips and 48,000 (!) Xeon Phi accelerators, for a grand total of 3,120,000 compute cores. It has a theoretical peak performance of 55 petaflops, but an actual benchmarked Linpack performance of 33.86 petaflops.
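
For what it's worth, those headline figures are internally consistent. A quick sanity check, assuming the widely reported configuration of 12-core Ivy Bridge Xeons and 57-core Xeon Phi accelerators:

    xeon_chips, cores_per_xeon = 32000, 12  # Ivy Bridge Xeons (assumed 12-core)
    phi_chips, cores_per_phi = 48000, 57    # Xeon Phis (assumed 57-core)

    total_cores = xeon_chips * cores_per_xeon + phi_chips * cores_per_phi
    print(f"Total compute cores: {total_cores:,}")    # 3,120,000

    rpeak, rmax = 55.0, 33.86  # theoretical vs. benchmarked petaflops
    print(f"Linpack efficiency: {rmax / rpeak:.1%}")  # ~61.6%

In other words, Tianhe-2 only sustains about 62% of its theoretical peak on Linpack -- a reminder that raw hardware alone doesn't deliver the flops.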

As for whether we'll actually get to exascale by 2020, I think the jury's still out. While many-core accelerators like Xeon Phi and Tesla have given the HPC industry a much-needed boost, it was a one-time thing; there's no evidence that they'll push us all the way to 1 exaflop -- they're still hardware, and thus ultimately beholden to the same scaling and power issues that the entire computing industry faces. Our hardware writer, Joel Hruska, has a great story on the difficulties that lie in wait on the path to exascale, if you want to find out more.

Here's hoping we get there, though. There's a whole raft of scientific applications that will open up when we do, apparently, including accurate simulation of the human brain and processing the data from the world's largest telescope.
