After several months of anticipation and speculation, NVIDIA finally unveiled the “Ti” version of the GeForce GTX 1080, a monstrous new graphics card that outperforms even the much-desired Titan X (Pascal). The “Ti” suffix was first used on the GeForce2 Ti some 17 years ago. While the launch itself was no surprise thanks to a heap of leaks, the release window and price caught many off guard: “next week” and (only) $699. The “ultimate GeForce” GPU, as CEO Jen-Hsun Huang called it, will offer boost clocks of up to 1.6GHz and a special “OC” clock of 2GHz, the company said during the launch event in San Francisco.
The GeForce GTX 1080 Ti is based on GP102 silicon, consisting of 12 billion transistors with 28 enabled clusters for a grand total of 3,584 CUDA cores (two clusters are disabled for better yields and lower power consumption). If that sounds familiar, it is because the specs mimic those of the mighty Titan X (Pascal) that the company announced in August.
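The arithmetic behind that 3,584 figure is straightforward if you know that each of Pascal's clusters (streaming multiprocessors) carries 128 CUDA cores; the sketch below works from the numbers in the article, with the 128-cores-per-SM figure assumed from the Pascal architecture:

```python
# How the 3,584 CUDA-core figure follows from the cluster count.
# 128 cores per SM is Pascal's layout (assumption not stated in the article);
# the SM counts come from the article itself.
CORES_PER_SM = 128
TOTAL_SMS = 30
ENABLED_SMS = TOTAL_SMS - 2  # two clusters disabled for yield / power

cuda_cores = ENABLED_SMS * CORES_PER_SM
print(cuda_cores)  # 3584, matching both the 1080 Ti and the Titan X (Pascal)
```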
From August to March, though, there were some changes. First and foremost, there is a new batch of GDDR5X or “G5X” memory, which ticks 10% higher and has much more headroom than the first generation, which debuted with the GeForce GTX 1080. The GeForce GTX 1080 Ti features 11GB of that new GDDR5X memory rather than the exotic and low-yielding HBM2 that AMD is expected to use in Vega (whose release keeps slipping as a consequence of that memory choice). NVIDIA said it could squeeze almost as much performance out of GDDR5X for non-mission-critical applications.
Externally the card looks the same as the GeForce GTX 1080 and the Titan X, but the company said it redesigned both the cooling solution and the power supply. The card is rated at a 220W TDP, which exceeds the vanilla GTX 1080’s 180W but falls shy of the Titan X (Pascal)’s 250 watts. The key difference, and what gives the 1080 Ti a slight edge over the Titan X, is its memory configuration. The 1080 Ti features a rather odd 11GB of GDDR5X memory connected to a 352-bit memory interface and 88 ROPs, instead of the Titan X’s 12GB of GDDR5X, 384-bit interface, and 96 ROPs. Reducing the number of memory traces from the GPU, however, freed up power and in turn enabled higher clocks.
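The narrower bus is less of a handicap than it sounds once the faster memory is factored in. A back-of-envelope calculation, assuming the roughly 10% faster second-generation G5X runs at 11 Gbps per pin (versus the GTX 1080's 10 Gbps first-generation chips), shows the trade-off:

```python
# Peak memory bandwidth: bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
# The 11 Gbps figure is an assumption derived from the "10% higher" claim.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

gtx_1080_ti = peak_bandwidth_gbs(352, 11)  # 484.0 GB/s
titan_x     = peak_bandwidth_gbs(384, 10)  # 480.0 GB/s
print(gtx_1080_ti, titan_x)
```

So despite losing a 32-bit memory channel, the 1080 Ti would end up with slightly more peak bandwidth than the Titan X, under those assumed clocks.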
When it comes to performance, NVIDIA’s own tests put the GTX 1080 Ti at roughly 35% faster than a standard GTX 1080, making it the biggest generational jump for a Ti card yet: the step from the GeForce GTX 980 to the GTX 980 Ti was about 25 percent, NVIDIA stated, while the GTX 780 to GTX 780 Ti offered only an 18 percent increase in frame rates. Compared to the 1080, the 1080 Ti should also run about 5 degrees cooler at full load. In addition, NVIDIA (finally) removed the DVI connector from the card, though an HDMI-to-DVI adapter is included. From my standpoint, the removal of DVI is the most welcome change on a GeForce: it finally makes a single-slot card feasible, and if you’re into ultra-dense computing solutions, you could pack seven or eight 1080 Tis closely together to serve an iCafe or do some serious number crunching or rendering.
Besides the GeForce GTX 1080 Ti, NVIDIA also revealed FCAT VR, a new benchmark claimed to be the first true and easy way to accurately benchmark VR. Essentially, FCAT VR is a frame-time analysis tool that hooks into the rendering pipeline, grabbing performance metrics at a low level. It gathers total frame time, dropped frames, and data on how the VR headset’s projection techniques are operating. The tool outputs CSV files that you can open in an FCAT data analyzer and dig through for the information you need. Overall, a really good job by NVIDIA. Now all we need to do is wait and see it all in action.
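Because the output is plain CSV, you could also post-process it yourself rather than rely on an analyzer. The sketch below is purely illustrative: the column names (`frame_time_ms`, `dropped`) and the sample rows are assumptions for demonstration, not FCAT VR's actual schema:

```python
import csv
import io

# Hypothetical FCAT-VR-style CSV data; the real tool's column layout may differ.
sample = io.StringIO(
    "frame_time_ms,dropped\n"
    "11.1,0\n"
    "11.3,0\n"
    "22.4,1\n"
    "11.0,0\n"
)

rows = list(csv.DictReader(sample))
frame_times = [float(r["frame_time_ms"]) for r in rows]
dropped = sum(int(r["dropped"]) for r in rows)

avg_ms = sum(frame_times) / len(frame_times)
print(f"avg frame time: {avg_ms:.2f} ms, dropped frames: {dropped}")
# prints "avg frame time: 13.95 ms, dropped frames: 1"
```

Frame-time averages and dropped-frame counts are exactly the kind of summary a VR benchmark needs, since a single long frame (like the 22.4 ms row above) causes a visible stutter even when the average frame rate looks fine.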