Among today's slew of announcements, NVIDIA introduced a GPU purpose-built for cloud computing. This card is not a single-chip solution, but rather a multi-chip board that delivers several GPUs through a single PCIe slot. It will allow enterprises to deliver the PC experience to many remote users regardless of their resolution and graphical demands.
NVIDIA’s GPU strategy gets a third pillar – the Cloud
The VGX board itself is actually four mid-range Kepler GPUs on one graphics card. It could be considered something like a dual-GPU K10 TESLA card, except that NVIDIA has not disclosed whether the chips on the board are GK104, GK110, or something else entirely. All NVIDIA was willing to confirm was that the card uses mid-range GPUs, which leads us to believe they are likely GK104-based like the K10 dual-GPU TESLA card. The card will feature 16GB of frame buffer (RAM), which works out to about 4GB per GPU; in the grand scheme of things that really isn't much when you consider that the current generation of Fermi-based TESLA GPUs features 6GB per GPU. NVIDIA themselves admitted that they need as much frame buffer as they can physically get on their GPUs, so there's a good chance we'll see cards with even more memory in the future.
Industrial Light & Magic demo: 100 displays running rendering of ILM movie effects… all driven by a single 2U server with 16 GPUs inside
While the clockspeeds of the GPUs on the VGX cloud computing card are unknown, the card's total wattage is known, and it is astonishingly low for a four-GPU GPGPU part. NVIDIA claims the card consumes only 150W, an amazingly low figure when you consider that their past single-GPU parts were well over 200W. The decrease in power consumption is likely due in part to the GPUs in the VGX card being heavily downclocked. When we asked NVIDIA about clockspeed and price, they said both were still to be determined, though they did confirm that clockspeed isn't as critical for the applications VGX will be used in. As such, they are likely to downclock the card significantly, enabling better yields on those graphics chips, which should significantly drive down costs for NVIDIA.
How VGX works in a business environment – NVIDIA teamed up with Citrix
Their VGX model for cloud computing, though, is twofold, involving both hardware and software specialized for certain cloud applications. Because of the way the VGX cloud architecture is designed, a software module of the VGX architecture lives inside the hypervisor of the VM pipeline, and it comes in three different types at varying prices. Having different USMs (User Selector Machines) enables a single VGX card to drive different types of graphical loads. Currently, NVIDIA envisions three USMs: the standard NVIDIA USM, an NVS USM, and a Quadro USM. Each will serve a different purpose and will be licensed separately from the others, and the licensing could stand to be a bigger profit model for NVIDIA than the card itself. That said, NVIDIA has said that they want to be competitive and give people a reason to switch, so there's a good chance the prices will be kept low.
Currently, the frame buffer is the card's greatest limitation; once NVIDIA finds a way to fit more frame buffer on the card, it will be able to drive more cloud VMs. Today, NVIDIA is capable of running over 100 users at 1920×1080 off a single card, but they expect realistic deployments to land closer to 40-60 users per card, since most users will demand quite a bit of graphical horsepower beyond pure resolution. If the VGX card runs out of frame buffer under too much load, it will refuse to issue any more VMs and will alert the administrator to the situation.
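To see why the frame buffer caps the user count, a back-of-envelope calculation helps. The sketch below is illustrative only: the per-user VRAM figures and the reserved-overhead amount are our assumptions, not NVIDIA-published numbers, but they show how 16GB can plausibly stretch to over 100 light desktop sessions while heavier 3D workloads land in the 40-60 range.

```python
# Back-of-envelope estimate of how many cloud users a VGX card's 16GB
# frame buffer could serve. Per-user VRAM figures below are assumed
# for illustration, not NVIDIA-published numbers.

CARD_VRAM_GB = 16.0  # total frame buffer on the VGX board (from the article)

def max_users(per_user_vram_mb: float, reserved_gb: float = 1.0) -> int:
    """Return how many users fit in the card's frame buffer after
    reserving some VRAM for hypervisor/driver overhead (assumed)."""
    usable_mb = (CARD_VRAM_GB - reserved_gb) * 1024
    return int(usable_mb // per_user_vram_mb)

# A bare 1920x1080 32-bit framebuffer is only ~8 MB, but with double
# buffering and desktop surfaces a light session plausibly needs more:
light_users = max_users(128)  # light, mostly-2D desktop sessions
heavy_users = max_users(320)  # heavier 3D workloads per user

print(light_users, heavy_users)  # -> 120 48
```

With these assumed figures, light sessions come out just over the 100-user mark NVIDIA quoted, while heavier per-user allocations fall into the 40-60 range they expect in practice.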
We will keep you updated on this front as we get more information about NVIDIA's VGX pricing and business model. Currently, our biggest concern is that this cloud computing model will further incentivize users to do their graphical processing from tablets and laptops. That is concerning for NVIDIA because it may further erode their low-end market, which could hurt them, since many of NVIDIA's midrange and lower-end cards absorb quite a bit of their manufacturing costs.