Published : 31 May 2018 | Published By : QYRESEARCH
Nvidia unveiled an enormous box yesterday called the HGX-2, and it is definitely packed with raw power. This dream machine for geeks and data centers alike is a cloud-server platform built to supply massive computing power to a wide range of systems.
More About the HGX-2 Cloud Server Introduced by Nvidia
The HGX-2 is designed for high-performance computing and artificial-intelligence workloads, integrated into a single power-packed unit. The device houses 16 Nvidia Tesla V100 GPUs, good for 2 petaFLOPS of low-precision compute for AI. At medium (single) and high (double) precision, the server delivers roughly 250 teraFLOPS and 125 teraFLOPS of processing power, respectively. The unit carries half a terabyte of GPU memory and is equipped with 12 NVSwitches, which enable GPU-to-GPU communication at 300 GB per second. That interconnect bandwidth is double what the company offered in the HGX-1 released last year.
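The quoted figures line up with Nvidia's published per-GPU peak rates multiplied across the 16 boards. A quick back-of-envelope sketch (the per-V100 numbers below are assumptions taken from Nvidia's V100 spec sheet, not from this article):

```python
# Back-of-envelope check of the HGX-2 throughput figures.
# Per-GPU peak rates assumed from Nvidia's Tesla V100 (SXM) spec sheet.
NUM_GPUS = 16
TENSOR_TFLOPS = 125.0   # tensor-core (low-precision AI) peak per V100
FP32_TFLOPS = 15.7      # single ("medium") precision peak per V100
FP64_TFLOPS = 7.8       # double ("high") precision peak per V100
MEM_GB = 32             # HBM2 memory per V100

print(NUM_GPUS * TENSOR_TFLOPS / 1000)   # petaFLOPS for AI: 2.0
print(NUM_GPUS * FP32_TFLOPS)            # single-precision teraFLOPS: ~251
print(round(NUM_GPUS * FP64_TFLOPS))     # double-precision teraFLOPS: ~125
print(NUM_GPUS * MEM_GB)                 # total GPU memory in GB: 512
```

The 512 GB total is the "half terabyte" of GPU memory the article refers to.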
According to Paresh Kharya, group product manager for Nvidia's Tesla data-center products, the extremely high communication speed lets users treat the collection of GPUs as one single giant GPU. Developers can thereby tap a massive pool of computing power and use the half terabyte of GPU memory as a single memory block in their programs. However, not just anyone will be able to buy one of these boxes. Nvidia is selling them only to resellers and similar partners, who in turn are expected to sell the product to hyperscale data centers and cloud service providers. Cloud resellers who buy the product get a single device with a known, uniform configuration, and developers can easily write programs that take advantage of the high efficiency it offers.