Last year, Micron collaborated with NVIDIA to bring GDDR5X memory to market, whose 2.5 GHz QDR clock, i.e. “10 Gbps per pin”, achieved record bandwidth per pin. With the GTX 1080 Ti and TITAN Xp, Micron and NVIDIA went 10% higher and reached 11 Gbps, with some lucky owners able to hit a 3 GHz QDR clock, i.e. “12 Gbps per pin”. All of that is set to change with SK Hynix launching GDDR6 memory. Debuting on AMD (and NVIDIA) graphics cards about six to eight months from now, GDDR6 replaces both GDDR5 and GDDR5X, bringing great improvements for GPUs and FPGAs. SK Hynix has introduced the world’s fastest 8Gb (i.e. 1GB) Graphics DDR6 DRAM chip. Its operating
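The per-pin figures above follow from simple QDR arithmetic: a quad-data-rate interface transfers four bits per pin per clock cycle, and total bus bandwidth is the per-pin rate times the bus width. A minimal sketch of that math, assuming the GTX 1080 Ti’s 352-bit bus as an example (the bus width is not stated in the excerpt):

```python
# Sketch of the QDR per-pin and total-bandwidth arithmetic from the article.
# Bus width below is an assumed example (GTX 1080 Ti uses a 352-bit bus).

def per_pin_gbps(qdr_clock_ghz: float) -> float:
    """A QDR interface transfers 4 bits per pin per clock cycle."""
    return qdr_clock_ghz * 4

def total_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Aggregate bandwidth in GB/s: per-pin Gbps times bus width, over 8 bits/byte."""
    return per_pin_gbps * bus_width_bits / 8

print(per_pin_gbps(2.5))             # 10.0 Gbps per pin (stock GDDR5X)
print(per_pin_gbps(3.0))             # 12.0 Gbps per pin (overclocked)
print(total_bandwidth_gbs(11, 352))  # 484.0 GB/s on a 352-bit bus
```

The same arithmetic applies to GDDR6; only the achievable per-pin rate changes.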
NVIDIA GeForce GTX 1080 Ti and FCAT VR Are Coming
After several months of anticipation and speculation, NVIDIA finally unveiled the “Ti” version of the GeForce GTX 1080. Bearing a suffix first used on the GeForce 2 Ti some 17 years ago, this monstrous new graphics card outperforms even the much-desired Titan X (Pascal). While the launch itself was no surprise thanks to a heap of leaks, its release window and price caught many off guard: “next week” and (only) $699. The “ultimate GeForce” GPU, as CEO Jen-Hsun Huang called it, will offer up to 1.6 GHz boost clocks and a special “OC” clock of 2 GHz, the company said during the launch event in San Francisco. The GeForce GTX
New NVIDIA Quadro Family Plans to Heavily Monetize Pascal GPUs
NVIDIA’s handling of the GeForce / Quadro / Tesla line-up has seen a lot of turnover over the past couple of years. The sequence of “launch as GeForce, downclock as Tesla, optimize and launch as Quadro” changed into “launch as Tesla, optimize as GeForce, and be reliable as Quadro”. With Pascal, the story turned out to be much the same. NVIDIA introduced GP100 as a Tesla in April 2016, followed by the GP102 chip as the Titan X (no longer branded as GeForce), Quadro P6000 and Tesla P40. At the same time, the GP104/106/107 did not follow the same sequence, with only GP104 debuting as the Quadro P5000 and Tesla P4. Second day of
16nm MSI GeForce GTX 1080 Leak Ahead of Computex Taipei 2016
Given that we won’t be seeing any high-end GPU hardware until the first quarter of 2017 (HBM2-powered AMD Vega 10, Nvidia Pascal GP100), the focus for 2016 will be on mainstream cards. The shift from 28nm to 16nm (Nvidia) and 14nm (AMD) forced both companies to adopt a conservative approach and focus on entry-level and mainstream silicon, rather than the “highest of all ends”. While Nvidia did launch its 15-billion-transistor GP100 silicon, i.e. Tesla P100, at the Nvidia GPU Technology Conference, Jen-Hsun Huang stated that real volume shipments will only start in the first quarter of 2017, roughly the same time
HBM2 Will Revolutionize Your Computer
Even though the HBM (High Bandwidth Memory) standard only launched last June in the form of AMD’s Fiji GPU, that memory was considered a ‘trial run’ for HBM2 – a memory standard which is here to stay. Launching in mid-2016 with AMD Polaris and NVIDIA Pascal, the HBM2 memory standard will redefine computing as we know it. There are several memory standards aiming to replace the DDR and GDDR standards, including Intel-Micron 3D XPoint (pronounced: Cross Point) Optane memory – but HBM looks to have the widest support. Compared to HBM2, first-generation HBM had 1GB capacity and offered 0.5 Gbps bandwidth in a 4-Cube configuration for a
GDDR5X Memory Shows Better Than Expected Results
2016 will be marked by the arrival of two memory standards, which should spread across the mainstream and high-end / enthusiast line-ups like wildfire. First, we have HBM2, an improved version of the HBM memory which debuted with (and so far ships only inside) the AMD R9 Fury family of cards. HBM2 promises a fourfold increase in capacity and double the memory bandwidth – meaning a single card can go from 4GB and 512GB/s to 16GB and 1TB/s. Given the low volume of HBM and HBM2 memory, those two will probably remain exclusive to enthusiast graphics cards, such as the recently renamed Greenland, the high-end Polaris graphics processor from AMD
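The 4GB/512GB/s to 16GB/1TB/s jump quoted above is consistent with four-stack configurations of both generations. A small sketch of that scaling, assuming the per-stack figures from the JEDEC specs (1GB and 128 GB/s per HBM stack, 4GB and 256 GB/s per HBM2 stack; the excerpt only gives the four-stack totals):

```python
# Sketch of the HBM -> HBM2 capacity/bandwidth scaling implied by the article.
# Per-stack figures are assumptions drawn from the JEDEC specs, not the text.

STACKS = 4  # typical high-end card configuration (e.g. R9 Fury X)

hbm1_capacity_gb = STACKS * 1      # 4 GB total on the card
hbm1_bandwidth_gbs = STACKS * 128  # 512 GB/s total

hbm2_capacity_gb = STACKS * 4      # 16 GB total on the card
hbm2_bandwidth_gbs = STACKS * 256  # 1024 GB/s, i.e. 1 TB/s total

print(hbm2_capacity_gb / hbm1_capacity_gb)      # 4.0 — fourfold capacity
print(hbm2_bandwidth_gbs / hbm1_bandwidth_gbs)  # 2.0 — double the bandwidth
```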