GPUs Power Five of World's Top Seven Supercomputers

The top 10 of the newly published Top500 list boasts three powerful new systems with one common engine: the Nvidia Volta V100 general-purpose graphics processor. With the rollout of the 51st Top500 list in Frankfurt this morning, Nvidia highlighted its role in powering the new number-one system, Summit, at Oak Ridge National Laboratory; the new number-three system, Sierra, at Lawrence Livermore National Laboratory; and Japan's new fastest supercomputer, the number-five machine, ABCI. Nvidia GPUs are also under the hood of the world's fastest industrial supercomputer, Italian energy company Eni's HPC4 system, which enters the list at number 13.

Debuting at Oak Ridge National Laboratory earlier this month, the new number-one champion Summit achieved 122.30 petaflops on its Linpack submission, reclaiming the U.S. lead at the top of the list and displacing China's Sunway TaihuLight, now in second place with 93 Linpack petaflops (out of a theoretical peak of 125 petaflops). The benchmarked Summit is spec'd at 187.66 peak petaflops, putting its Linpack efficiency at 65 percent. Note that this Rpeak is 28 petaflops short of the full build: the complete system comprises 4,608 IBM Power9 nodes, which at 7.8 teraflops per V100 GPU works out to 215.65 peak petaflops from the GPUs alone (the Power9s add roughly 5 percent more flops). By the time the system had to be handed over for benchmarking, that was as large a run as the team could manage, Buddy Bland, project director of the Oak Ridge Leadership Computing Facility, told HPCwire, and the number will continue to improve.
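The arithmetic behind those figures is easy to verify. A back-of-envelope sketch in Python, using the node and GPU counts reported above and assuming 7.8 double-precision teraflops per V100 (the six-GPUs-per-node figure comes from the Green500 discussion below):

```python
# Back-of-envelope check of Summit's Top500 figures.
rmax_pf = 122.30    # benchmarked Linpack result (Rmax), petaflops
rpeak_pf = 187.66   # theoretical peak (Rpeak) of the benchmarked partition

# Linpack efficiency is simply Rmax / Rpeak.
efficiency = rmax_pf / rpeak_pf
print(f"Linpack efficiency: {efficiency:.0%}")

# Full system: 4,608 Power9 nodes x 6 V100 GPUs/node x 7.8 FP64 TF/GPU.
nodes, gpus_per_node, tf_per_gpu = 4608, 6, 7.8
gpu_peak_pf = nodes * gpus_per_node * tf_per_gpu / 1000  # TF -> PF
print(f"GPU-only peak: {gpu_peak_pf:.2f} PF")

# The gap between the full-system GPU peak and the benchmarked Rpeak.
print(f"Shortfall vs benchmarked Rpeak: {gpu_peak_pf - rpeak_pf:.0f} PF")
```

Running this reproduces the article's numbers: roughly 65 percent efficiency, 215.65 petaflops of GPU-only peak, and the 28-petaflop gap between the full build and the benchmarked partition.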


Sierra, based at Lawrence Livermore National Laboratory and procured under the same CORAL RFP as Summit, was something of a surprise entrant in third position with 71.61 Linpack petaflops out of 119.19 Rpeak petaflops, delivered using 17,280 GPUs. That is a Linpack efficiency of 60 percent. Both machines were built by IBM in collaboration with Nvidia and Mellanox for the United States Department of Energy; however, Sierra's deployment was running a few weeks behind Summit's, so it was not certain (from our vantage point, at least) that it would be benchmarked in time for this list release. The IBM/Nvidia/Mellanox CORAL machines Summit and Sierra also took the first two spots on the HPCG benchmark. On the Green500, there was a wide spread between the machines, with the six-GPU-per-node Summit at number five and the four-GPU-per-node Sierra ranked 276th.


In fifth place, Japan's fastest system, the AI Bridging Cloud Infrastructure (ABCI), delivers 19.6 petaflops of performance (out of a peak of 32.58 petaflops) using 4,352 V100 GPUs. The Fujitsu-built supercomputer is installed at Japan's National Institute of Advanced Industrial Science and Technology (AIST) on the Kashiwa II campus of the University of Tokyo. (See our earlier coverage here.) Nvidia also powers two existing top-ten machines: the Cray XC50 Piz Daint, Europe's fastest, deployed at the Swiss National Supercomputing Centre (CSCS), and the previous U.S. title-holder Titan, installed at Oak Ridge National Laboratory in 2012 as an upgrade to Jaguar. Delivering 19.5 petaflops of Linpack performance using 5,320 P100 GPUs, Piz Daint drops to sixth place from its previous third-place position. Titan, the Cray XK7 with Nvidia K20x GPUs that was once the world's fastest supercomputer (with 17.59 petaflops of Linpack performance), fell two spots to number seven.

The new systems reflect the broader shift to accelerators in the Top500 list, said Nvidia, describing the machines as AI supercomputers uniquely capable of running both traditional HPC simulations and revolutionary new AI workloads. GPUs now power five of the world's seven fastest systems as well as 17 of the 20 most energy-efficient systems on the new Green500 list, the company noted, adding that the majority of computing performance added to the Top500 list comes from Nvidia GPUs. The Volta Tensor Core GPU makes it possible to combine simulation with the power of AI to advance science, find cures for disease, and develop new forms of energy, said CUDA creator Ian Buck.


The same GPUs are also inside HPC4, the new entrant at number 13 that is the world's most powerful publicly announced commercial system, delivering 12.21 Linpack petaflops (out of a theoretical peak of 18.62 petaflops) for Italian energy company Eni's oil and gas exploration activities. Built by Hewlett Packard Enterprise (HPE), the cluster comprises 1,600 ProLiant DL380 nodes, each equipped with two 24-core Intel Skylake processors and two Nvidia Tesla P100 GPU accelerators. The latest Top500 report includes 110 systems with some manner of accelerator and/or co-processor technology, up from 101 six months ago. Of these, 98 are equipped with Nvidia chips, seven systems use Intel Xeon Phi (coprocessor) technology, and four use PEZY technology. Two systems (ranked 52nd and 252nd) use a combination of Nvidia and Intel Xeon Phi accelerators/coprocessors. The recently upgraded Tianhe-2A (now in fourth position with 61.44 petaflops, up from 33.86 petaflops), installed at the National Supercomputer Center in Guangzhou, uses custom-built Matrix-2000 accelerators. Nineteen systems now use Xeon Phi as the main processing unit.


This year's Top500 list represents a clear shift toward systems that support both HPC and AI computing, said Jack Dongarra, professor at the University of Tennessee and Oak Ridge National Laboratory and Top500 author. Accelerators, such as GPUs, are critical to delivering this capability at the performance and efficiency targets demanded by the supercomputing community.