Ever since the first graphics processors hardwired basic display operations in the early 1980s – chips like the NEC 7220 and Hitachi 63484, followed by the first PC cards such as the IBM PGA some 30 years ago – the need for dedicated graphics processing hardware has been firmly established at the high end of the PC landscape.
At that time it was 2D only, yet it still cost a couple of grand per adapter card: a price class that has seemingly persisted to this day for professional graphics cards like the Nvidia and AMD models included in this roundup review.
After the demise of the original Silicon Graphics, as well as the other two major independent OpenGL-focused professional 3D GPU brands (3DLabs and E&S) – a big loss in terms of the features and capabilities those processors offered – what we have today is a Nvidia and AMD/ATI duopoly in this space. Sure, Intel's Larrabee was originally targeted at this same market but, as we all know, it failed and was redirected to the HPC arena for pure compute, where it thrives now.
While DirectX, for better or worse, dominates the PC 3D graphics landscape, the inherently more reliable and precise OpenGL is the API of choice for most professional applications. And that's where the difference between otherwise identical GPU dies on consumer and professional cards comes in. Enabling full OpenGL functionality on the professional GPUs yields not only, say, a threefold OpenGL benchmark advantage, but also the correct OpenGL application behaviour required to pass all the expensive certification procedures and driver optimizations for professional apps – one of the reasons, besides margin aims, why these cards cost four to five times more than their consumer brethren built on similar chips.
Nvidia Quadro vs AMD FirePro
OpenGL professional cards also have between two and four times more local memory than consumer ones. For instance, the AMD Radeon R9 290X has 4 GB of RAM, while its professional equivalent, the FirePro W9100, has a whopping 16 GB. Driving two 8K displays, plus letting larger in-memory compute jobs use all those teraflops without slowing down to cross the PCIe bus, demands more local memory. And yes, many professional 3D apps can readily make use of 4K and 8K resolutions today: whether it is 3D city modelling, detailed engine-assembly review, or complex molecular-interaction simulations.
Those extra pixels do need extra horsepower to drive them, plus the extra memory. Game developers can also benefit from humongous local card memory, as they can optimize game memory usage well in advance for future consumer cards arriving a few years later.
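Incidentally, if you want to see how much of that local memory your own card exposes (and how much is currently free), both vendors provide OpenGL extensions for exactly this. Here is a minimal sketch in C++ – assuming GLFW for context creation, though any windowing library works – that queries GL_NVX_gpu_memory_info on Nvidia cards and GL_ATI_meminfo on AMD ones:

```cpp
// Minimal sketch: query dedicated/free video memory via vendor OpenGL extensions.
// GLFW is an assumption for context creation; the tokens below are the published
// values from GL_NVX_gpu_memory_info and GL_ATI_meminfo (both report KiB).
#include <GLFW/glfw3.h>
#include <cstdio>
#include <cstring>

#define GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX          0x9047
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX  0x9049
#define GL_TEXTURE_FREE_MEMORY_ATI                       0x87FC  // returns 4 ints; [0] = total free

int main() {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);            // an offscreen context is enough
    GLFWwindow* win = glfwCreateWindow(64, 64, "vramquery", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    const char* ext = (const char*)glGetString(GL_EXTENSIONS);  // legacy context assumed
    GLint v[4] = {0};
    if (ext && strstr(ext, "GL_NVX_gpu_memory_info")) {
        glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX, v);
        printf("Dedicated VRAM: %d MiB\n", v[0] / 1024);
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, v);
        printf("Free VRAM:      %d MiB\n", v[0] / 1024);
    } else if (ext && strstr(ext, "GL_ATI_meminfo")) {
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, v);
        printf("Free texture memory: %d MiB\n", v[0] / 1024);
    } else {
        printf("No vendor memory-info extension exposed.\n");
    }
    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```

Note that these extensions report free memory as well as totals, which is handy for checking whether a large dataset actually stays resident on the card rather than spilling over PCIe.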
In this roundup, we have the Quadro K2200, which has 4 GB of VRAM, while the K5200 and W8100 both have 8 GB. Note that the W8100 has twice the memory bus width of the K5200, at 512 bits vs 256 bits.
If you rely on GPGPU computing, these cards offer an added advantage: their double-precision FP performance is usually fully enabled – not crippled as in their consumer twins. For instance, the otherwise identical dies of the R9 290X and FirePro W8100 show an eightfold difference in DP FP performance, and Nvidia's GPU dies follow a similar path. Single-precision FP is usually left at full speed in both cases, though, as it affects gaming-physics competitiveness on the consumer side.
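As a quick sanity check of what a given card exposes for GPGPU work, a standard OpenCL device query will at least confirm FP64 support (actual DP throughput still has to be measured with a kernel). A minimal sketch using only the stock OpenCL 1.2 host API:

```cpp
// Minimal sketch: enumerate OpenCL GPU devices and report double-precision support.
// A zero CL_DEVICE_DOUBLE_FP_CONFIG bitfield means no usable FP64 on that device.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint nplat = 0;
    clGetPlatformIDs(0, nullptr, &nplat);
    cl_platform_id plats[8];
    clGetPlatformIDs(nplat > 8 ? 8 : nplat, plats, nullptr);

    for (cl_uint p = 0; p < nplat && p < 8; ++p) {
        cl_uint ndev = 0;
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 0, nullptr, &ndev);
        if (ndev == 0) continue;                      // platform has no GPU devices
        cl_device_id devs[8];
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, ndev > 8 ? 8 : ndev, devs, nullptr);

        for (cl_uint d = 0; d < ndev && d < 8; ++d) {
            char name[256] = {0};
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            cl_device_fp_config fp64 = 0;
            clGetDeviceInfo(devs[d], CL_DEVICE_DOUBLE_FP_CONFIG,
                            sizeof(fp64), &fp64, nullptr);
            printf("%s: FP64 %s\n", name, fp64 ? "supported" : "not supported");
        }
    }
    return 0;
}
```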
As said, here we take a quick look at the two new OpenGL GPUs from Nvidia – the Quadro K2200 and K5200 – as well as the K5200's head-on competitor from AMD, the FirePro W8100. To emphasize GPU performance differences over the influence of base CPU speed, all cards were run on a standard platform: a 3.5 GHz quad-core Haswell Core i7-4770K with 8 GB RAM and Windows 7 Ultimate, running off an Intel enterprise SSD. The newest drivers as of August 22nd were used on all cards. The benchmarks used were the most recent version of the sophisticated SPECviewperf 12 suite, which measures performance across a variety of pro apps and visualization options, and the Cinebench R15 OpenGL benchmark, which focuses more on raw card performance. Here are the results.
SPECviewperf 12 results reflect not just GPU graphics performance, but also the amount of memory available to locally store the dataset. Among current OpenGL benchmarks, this one is the closest to the actual application usage mix seen on professional 3D workstations.
As you can see, the scaling among the three Nvidia cards follows an almost perfect 1:2:4 progression, which all but renders the first card, the K2000, obsolete, as its overall specs are similar to the K2200's. Also note that, despite the W8100's higher raw hardware specs (GPU and memory bandwidth), the K5200 beats it by an unusually wide margin in some apps of this test suite. This is very likely because the K5200, built on Nvidia's updated Kepler architecture, brings substantial gains over the K5000 in memory performance and overall FLOPS (3 TFLOPS vs 2 TFLOPS), and doubles memory capacity from 4 GB to 8 GB – improvements directly noticeable in the professional benchmarks, and ones that help Nvidia stay competitive with AMD.
Nvidia's Maxwell-based K2200 also performs quite well against the rest of the roundup, even beating AMD's W8100 in one test (sw-03), and it handily beats the old Kepler-based K2000. Because the K2000 and K2200 are the lowest-end cards Nvidia offers, the differences between architectures are more noticeable here. If anything, AMD should be very worried about a potential high-end Maxwell-based Quadro card from Nvidia, if it improves performance as much as the K2200 does over the Kepler-based K2000.
Otherwise, the new K5200 from Nvidia takes the cake in most of the benchmarks, with the exception of three, which indicates that AMD is still very competitive with Nvidia.
The Cinebench R15 OpenGL routine, commonly run on consumer GPUs as well, requires far fewer resources. However, even here, the full OpenGL performance and feature set of these cards beats their consumer brethren many times over:
As you can see here, the K2200, even though spec-wise closer to the K2000 than to the K5200, is much nearer to the Quadro K5200 in performance. I feel Nvidia should retire the K2000, or at least massively cut its price relative to the K2200, since there is otherwise little sense in considering it: the K2200 delivers far better performance for essentially the same money. The K2200 is proving to be a very good budget card for professional applications, and shows that Maxwell is a massive improvement over Kepler.
Also, the AMD W8100 has a slight performance advantage over the K5200 here: the raw compute and memory capability of the Hawaii GPU core gets to shine.
And here you can see the GPU-Z screenshots of all the Nvidia entries – GPU-Z crashes on the AMD card, so unfortunately we couldn't get far there, as you can see in the screenshot.
If you look at other, more general-purpose 3D CAD apps, like the AutoCAD 2015 shown here, the picture may be a little different – literally. In AutoCAD, 3D polygonal performance for wireframe and shaded models matters far more than complex textures and effects, which are still relatively rarely used in this software for interactive visualization. This means that even a low- to mid-range card like the Quadro K2200 has sufficient performance for most CAD jobs. I tested both the K2200 and K5200 on my AutoCAD Kuala Lumpur model – plenty of buildings, but purely polygonal geometry – and there was zero difference in responsiveness, with both handling any 3D visualization operation in real time.
Worse, since DirectX is these days – like it or not – supported by many of these apps as well, the equation changes: consumer GPUs will run them just as well as the professional ones, at a small fraction of the price. AutoCAD was, in fact, one of the first to accommodate DirectX and, coupled with its relatively low requirements, this substantially undermines the justification for premium-priced professional cards.
On the other hand, many other apps and usage models do value the added benefits of OpenGL – especially those that run under Linux for performance, reliability and multi-core scaling reasons, where OpenGL is the sole choice. The trick, though, is to ensure that the Linux OpenGL driver is at least on the same level of quality as its Windows equivalent – something Nvidia has done well, but where AMD still has a way to go.
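One quick way to verify which driver actually backs an OpenGL context on Linux is simply to ask it. The minimal sketch below (again assuming GLFW for context creation) prints the vendor, renderer and version strings; if GL_RENDERER reports a software rasterizer like llvmpipe instead of the Quadro or FirePro, the vendor driver isn't loaded.

```cpp
// Minimal sketch: print which OpenGL driver actually backs the context.
// GLFW is an assumption; on Linux, glxinfo reports the same data from the shell.
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* win = glfwCreateWindow(64, 64, "glcheck", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    printf("Vendor:   %s\n", glGetString(GL_VENDOR));    // e.g. "NVIDIA Corporation"
    printf("Renderer: %s\n", glGetString(GL_RENDERER));  // the actual GPU/driver pair
    printf("Version:  %s\n", glGetString(GL_VERSION));   // GL version plus driver build

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```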
So, in the end, how do you justify purchasing one of these capable but pricey cards? It all comes down to your application. If you design a tall building, an oil rig, or a new-generation plane engine, both the value of your application and, especially, the value of your work and its end product will usually demand total precision and guaranteed performance from the underlying hardware running your job in your selected app. The certifications and tests performed on all of these cards across a variety of systems prior to launch go as far as possible toward meeting those goals.
This post originally appeared on Bright Side of News*'s sister site, VR World.