At the International Supercomputing Conference (ISC), which takes place this week in Frankfurt, Nvidia finally unveiled the PCIe version of its largest chip, the GP100. This is not the rumored GP102 chip, and it confirms the words of Jen-Hsun Huang, co-founder and CEO of Nvidia Corporation, when he said that the company ‘taped out all the Pascals’: GP100, GP104 and GP106. The GP100-based Tesla P100 is a rather long dual-slot card, rivaling the dual-GPU Tesla K80 in length. The board features lower clocks for both the GPU and the HBM2 memory, meaning only the Nvidia NVLink-based daughterboards will offer the GP100 chip at its full performance.
In a way, the 2016 GPU Technology Conference represented a ‘coming of age’ for Nvidia: the company finally established its first proprietary standard, one that gained immediate industry traction with none other than IBM. The NVLink interconnect and the Mezzanine connector represent the first custom interfaces (outside of BGA packaging for its silicon) that Nvidia has designed. Given that IBM’s OpenPOWER conference is taking place at the same time as GTC, we searched for more details about the Mezzanine connector and NVLink itself, and uncovered quite a few interesting details. First and foremost, every Pascal (Tesla P100) and Volta (Tesla V100) product that utilizes the NVLink
At the 2016 GPU Technology Conference, Nvidia finally unveiled the Pascal GPU architecture. Perhaps the most interesting aspect of the GPU isn’t the capabilities the Pascal architecture brings, but rather the first non-Intel-driven high-bandwidth interface since AMD launched HyperTransport in 2001. The NVLink standard launched in 2014, when IBM announced its tie-up with Nvidia to bring the high-speed interconnect to market. The goal of NVLink is to free Nvidia’s future GPU architectures from dependence on PCI Express and achieve maximum bandwidth. If NVLink were replaced with 100% PCIe lanes, the design simply would not be as efficient in terms of lanes needed, and would
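The lane-efficiency argument can be illustrated with some back-of-the-envelope arithmetic. The sketch below assumes the publicly stated first-generation figures: NVLink 1.0 delivers 20 GB/s per direction per link and the Tesla P100 exposes four links, while a PCIe 3.0 lane carries 8 GT/s with 128b/130b encoding (roughly 0.985 GB/s per direction). The exact lane count is ours, not a figure from the article:

```python
import math

# NVLink 1.0 figures (per the P100 announcement)
nvlink_per_link_gbps = 20.0      # GB/s per direction, per link
nvlink_links = 4                 # links on a Tesla P100
nvlink_total = nvlink_per_link_gbps * nvlink_links  # 80 GB/s per direction

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding,
# so usable bandwidth per lane per direction in GB/s:
pcie3_per_lane_gbps = 8.0 * (128 / 130) / 8  # ~0.985 GB/s

# Lanes needed to match one P100's NVLink bandwidth with PCIe 3.0 alone
lanes_needed = math.ceil(nvlink_total / pcie3_per_lane_gbps)

print(f"NVLink aggregate: {nvlink_total:.0f} GB/s per direction")
print(f"PCIe 3.0 lanes to match it: {lanes_needed}")  # 82 lanes vs. the usual x16
```

Under these assumptions, matching a single P100's NVLink bandwidth would take roughly 82 PCIe 3.0 lanes, against the 16 a GPU normally gets, which is the efficiency gap the article alludes to.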
The Department of Energy has announced that it will award $425 million in grants to build 100+ petaflop supercomputers using IBM and Nvidia hardware.
As GPUs get more powerful, a better solution to bridge the connectivity gap with the CPU is needed. Might AMD have the solution?