Two new systems, together with a new performance record set by the Fugaku supercomputer, are the notable changes in the top ten.
Thanks to additional hardware that strengthened the system, Fugaku raised its High Performance LINPACK (HPL) score to 442 petaflops, up from 416 petaflops when it first appeared in June 2020 - a 6.4 percent performance boost.
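The 6.4 percent figure follows from the unrounded Top500 scores rather than the rounded petaflop values quoted here; the unrounded numbers below are an assumption based on the published list:

```python
# Fugaku's unrounded HPL scores from the Top500 database (assumed here;
# the article quotes the rounded values 416 and 442 petaflops).
june_2020 = 415.53   # petaflops at debut
nov_2020 = 442.01    # petaflops after the hardware addition

boost = (nov_2020 - june_2020) / june_2020 * 100
print(f"{boost:.1f}% improvement")  # 6.4% improvement
```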
The 57th edition of the Top500 is ranked using the HPL benchmark, which measures how fast a dedicated system solves a dense system of linear equations. Let's discuss these supercomputers in detail.
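To make the benchmark concrete, here is a toy, single-threaded sketch of the kind of problem HPL times: solving a dense linear system, with the run credited for 2/3·n³ + 2·n² floating-point operations. This is illustrative only; real HPL runs a heavily optimized, distributed LU factorization across the whole machine.

```python
import random
import time

def solve_dense(A, b):
    """Gaussian elimination with partial pivoting on a dense system Ax = b,
    the same kind of problem the HPL benchmark times at scale."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # Forward elimination
    for k in range(n):
        pivot = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[pivot] = A[pivot], A[k]
        b[k], b[pivot] = b[pivot], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Time a random system and report a flop rate the way HPL does:
# the benchmark credits the run with 2/3*n^3 + 2*n^2 operations.
n = 200
A = [[random.random() for _ in range(n)] for _ in range(n)]
b = [random.random() for _ in range(n)]
start = time.perf_counter()
solve_dense(A, b)
elapsed = time.perf_counter() - start
rate = ((2 / 3) * n**3 + 2 * n**2) / elapsed
print(f"{rate / 1e6:.1f} Mflop/s")   # Fugaku's HPL result is 442 Pflop/s
```

Pure-Python megaflops versus Fugaku's petaflops is roughly a factor of a hundred million - which is the whole point of the list.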
Fugaku is a Fujitsu-built supercomputer housed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. Thanks to its extra hardware, the system set a new world record of 442 petaflops on HPL, making it three times faster than the second-placed machine on the list.
Satoshi Matsuoka, the director of RIKEN R-CCS, explained that the improvement came from "finally being able to use the full machine rather than just a significant part of it." His team has been able to fine-tune the code for optimal performance since the June list. "I don't think we're going to be able to improve much more," Matsuoka remarked.
Summit's combination of cutting-edge hardware and robust data subsystems is an evolution of the hybrid CPU–GPU architecture of ORNL's Titan supercomputer, and a significant step toward delivering the first U.S. exascale supercomputer - a system capable of a billion billion double-precision floating-point operations per second.
With a performance of 148.8 petaflops, this IBM-built machine at the Oak Ridge National Laboratory (ORNL) in Tennessee remains the fastest in the US. Each of Summit's 4,356 nodes houses two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs.
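From the node counts quoted above, the system-wide totals follow directly:

```python
# Totals implied by the Summit figures quoted in the article
nodes = 4356
cpus_per_node, cores_per_cpu = 2, 22   # two 22-core Power9s per node
gpus_per_node = 6                      # six Tesla V100s per node

cpu_cores = nodes * cpus_per_node * cores_per_cpu
gpus = nodes * gpus_per_node
print(cpu_cores, gpus)  # 191664 26136
```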
Sierra, one of the world's fastest supercomputers, was built for the Lawrence Livermore National Laboratory as the second Advanced Technology System for the National Nuclear Security Administration (NNSA). The machine gives nuclear weapons experts the computing capability they need to carry out the NNSA's stockpile stewardship mission through simulation rather than underground testing.
With a peak throughput of 125 petaFLOP/s, the IBM-built Sierra supercomputer delivers almost six times the sustained throughput and more than five times the sustained scalable scientific performance of Sequoia.
With a peak power draw of roughly 11 MW, this supercomputer, which pairs IBM Power9 processors with NVIDIA Volta graphics processing units, is nearly five times more energy efficient than Sequoia.
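The energy-efficiency claim can be sanity-checked from the two figures just quoted:

```python
peak_flops = 125e15      # Sierra's 125 petaflops peak
power_watts = 11e6       # roughly 11 MW peak power draw

gflops_per_watt = peak_flops / power_watts / 1e9
print(f"~{gflops_per_watt:.1f} Gflops per watt")  # ~11.4
```

Sustained HPL efficiency is lower than this peak-based estimate, but the ratio to Sequoia is of the same order.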
Sunway TaihuLight, housed at China's National Supercomputing Center in Wuxi, held the top spot for two years (2016-2017) but has since slipped in the rankings: it was third last year and now sits fourth.
Developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC), the system achieved 93 petaflops on the HPL benchmark. It uses Sunway SW26010 processors exclusively.
The new NERSC supercomputer is being installed (as of July 2021) in Berkeley Lab's Shyh Wang Hall. The system is named after Saul Perlmutter, a Berkeley Lab astrophysicist who shared the 2011 Nobel Prize in Physics for the discovery that the expansion of the universe is accelerating.
Perlmutter is a heterogeneous system built on the HPE Cray "Shasta" architecture, with both GPU-accelerated and CPU-only nodes. It is expected to outperform NERSC's current flagship system, Cori, by three to four times.
Phase 1 of the installation, comprising the GPU-accelerated nodes and the scratch file system, is scheduled to be ready for early science missions in the summer of 2021; Phase 2 will add the CPU-only nodes later in 2021. In its Phase 1 configuration, Perlmutter achieved 64.6 Pflop/s.
NVIDIA Corp. deployed this NVIDIA DGX A100 SuperPOD in-house. Ranked seventh in June, it has since doubled in size, lifting it to fifth place. AMD EPYC CPUs power the system, which is accelerated by NVIDIA's latest A100 GPUs; after the upgrade, Selene reaches 63.4 petaflops on HPL.
The air-cooled Selene was assembled in a standard data centre in just three weeks, compared with the 9-12 months a typical supercomputer installation takes. NVIDIA's first foray into DGX-based supercomputers was Saturn V, announced alongside the Volta GPU introduction in 2017.
Tianhe-2A, developed by China's National University of Defense Technology (NUDT), will serve as an open platform for research and education, and is expected to go live by the end of the year.
Its 4,981,760 cores achieve 61.4 Pflop/s using Intel Xeon CPUs and NUDT's Matrix-2000 DSP accelerators. It is installed at China's National Supercomputer Center in Guangzhou.
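The figures quoted imply a rough per-core rate - a back-of-the-envelope check only, since most of the flops actually come from the Matrix-2000 accelerators rather than the Xeon cores:

```python
hpl_flops = 61.4e15   # Tianhe-2A's 61.4 Pflop/s HPL result
cores = 4_981_760     # total core count from the list

print(f"~{hpl_flops / cores / 1e9:.1f} Gflops per core")  # ~12.3
```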
Tianhe-2A's sister system, the Tianhe-1A at the National Supercomputer Center in Tianjin, debuted at number one on the Top500 list in November 2010 with 2.57 petaflops and still ranked tenth in June 2013 with the same performance.
Booster Module for JUWELS – this BullSequana system is the most powerful in Europe, at 44.1 Pflop/s. Like Selene, it pairs AMD EPYC CPUs with NVIDIA A100 GPUs for acceleration over a Mellanox HDR InfiniBand network.
Thanks to its new booster module, JUWELS can substantially expand the scale of the simulations it runs, and it also offers the strongest platform in Europe for artificial intelligence (AI) workloads.
The system was created by Forschungszentrum Jülich together with Atos, a France-headquartered leader in digital transformation; ParTec, a Munich-based supercomputing firm; and NVIDIA, a specialist in accelerated computing platforms.
A Dell PowerEdge system deployed by the Italian energy company Eni S.p.A. comes in at No. 9. Using NVIDIA Tesla V100 accelerators and a Mellanox HDR InfiniBand network, it reaches 35.5 Pflop/s. Its full name is High Performance Computing - Layer 5 (HPC5).
When experimental machines are excluded and only production supercomputers drawing more than 1 MW are considered, it has been recognised as the world's second most energy-efficient computer, executing over twenty billion operations per second per watt.
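Taking the "twenty billion operations per second per watt" figure at face value, the power draw implied at full HPL speed is easy to estimate:

```python
hpl_flops = 35.5e15       # HPC5's HPL result
flops_per_watt = 20e9     # "twenty billion operations per second per watt"

implied_megawatts = hpl_flops / flops_per_watt / 1e6
print(f"~{implied_megawatts:.1f} MW")  # ~1.8 MW
```

That estimate is consistent with the article's note that the efficiency comparison covers machines consuming more than 1 MW.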
Thanks to this performance, the computer can process subsoil data using highly sophisticated in-house algorithms; HPC5 ingests geophysical and seismic data from all over the world.
From this information the system builds very detailed subsoil models that reveal what lies several kilometres beneath the surface: this is how Zohr, the largest gas field in the Mediterranean, was identified.
Frontera, a Dell C6420 system installed at the University of Texas' Texas Advanced Computing Center (TACC), is currently ranked No. 10. Its 448,448 Intel Xeon cores deliver 23.5 Pflop/s.
Hundreds of 28-core 2nd Gen Xeon Scalable (Cascade Lake) processors in Dell EMC PowerEdge servers, combined with NVIDIA nodes for single-precision computing, perform Frontera's heavy computational work. The processors implement Intel's Advanced Vector Extensions 512 (AVX-512), an instruction set that delivers twice as many FLOPS per clock cycle as the previous generation.
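A rough peak-throughput estimate for one such node illustrates the AVX-512 arithmetic; the 2.7 GHz clock and dual-socket layout are assumptions for the sketch, not figures from the article:

```python
# Hypothetical dual-socket Cascade Lake node; the 2.7 GHz clock is assumed.
sockets, cores_per_socket = 2, 28
ghz = 2.7

# AVX-512 with two FMA units per core:
# 2 units * 2 ops per fused multiply-add * 8 doubles = 32 flops/cycle,
# twice the 16 flops/cycle of 256-bit AVX2.
flops_per_cycle = 2 * 2 * 8

peak_gflops = sockets * cores_per_socket * ghz * flops_per_cycle
print(f"~{peak_gflops / 1000:.1f} Tflops peak per node")  # ~4.8
```

In practice AVX-512-heavy code runs at reduced clocks, so sustained rates sit well below this theoretical peak.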
Frontera's 8,008 nodes are linked by Mellanox HDR and HDR-100 interconnects that move data at up to 200 Gbps per link between switches, with water cooling supplied by system integration firm CoolIT and oil-immersion cooling by Green Revolution Cooling. Each rack is expected to draw about 65 kilowatts, with roughly a third of that power offset by TACC's wind power credits and generation, as well as solar power.
To conclude our discussion: supercomputers were first employed in national security applications such as nuclear weapons design and cryptography. Today, the aerospace, petroleum, and automobile sectors all use them on a regular basis. Furthermore, supercomputers have found widespread use in engineering and scientific research, including investigations of the structure of subatomic particles and of the origin and nature of the cosmos.