11:13 AM PDT: That’s all folks. Stay tuned to VR World for more updates from the 2016 GPU Technology Conference.
11:03 AM PDT: Baidu introduces its Self-Driving Car Computer, based on the Nvidia DRIVE PX 2 hardware platform.
10:58 AM PDT: The Nvidia DRIVE PX AI Car Computer is designed to span the pipeline between DRIVE PX 2, the DGX-1 supercomputer, and the actual vehicle, supporting deep learning frameworks such as Caffe, CNTK, KALDI, TensorFlow, Theano, and Torch.
Nvidia’s DRIVEKIT achieved the #1 accuracy score, with 8 of the remaining 9 systems also being GPU-accelerated.
10:52 AM PDT: The DGX-1 system is sold out at a price of $129,000. Nvidia claims one unit can replace “250 servers in-a-box,” where the cost of networking those servers (network switches, cabling) reaches $50,000 on its own. Thus, building a multi-PFLOPS 42-48U rack of DGX-1 systems would set you back $2.79 to $3.1 million.
10:48 AM PDT: Rajat Monga, Technical Lead and Manager at TensorFlow, discusses Google’s decision to open-source TensorFlow, turning an in-house-only tool into an open source project that ultimately became a business unit focused on deep learning analysis.
Google started work on TensorFlow and recently made the switch to the Pascal architecture. Training performance now ranges between 391.0 and 400.1 examples/sec, with the reported total_loss dropping below 12.0 as the number of processed samples increases. Models like this can help with tasks such as market analysis, and the whole goal of the project is to ‘democratize deep learning.’
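As a rough illustration of how a throughput figure like 391-400 examples/sec is measured, here is a minimal, framework-agnostic Python sketch; the function name and batch size are hypothetical stand-ins, not something shown in the keynote:

```python
import time

def examples_per_sec(step_fn, batch_size, num_steps=10):
    """Estimate training throughput by timing a fixed number of steps.

    step_fn: callable that runs one training step on a single batch.
    """
    start = time.perf_counter()
    for _ in range(num_steps):
        step_fn()  # e.g. one forward/backward pass in any framework
    elapsed = time.perf_counter() - start
    return (num_steps * batch_size) / elapsed
```

Dividing the total examples processed by wall-clock time is the standard way figures of this kind are reported, regardless of framework.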
10:45 AM PDT: Bryan Catanzaro, senior researcher at Baidu, discusses how Baidu was able to train a model 30 times larger on a Pascal-based system than on a Maxwell-based GPU setup.
10:37 AM PDT: Nvidia introduced its first complete supercomputer targeting deep learning AI: the DGX-1.
- 2U Form Factor
- 2X Intel Xeon E5 v4
- 8x Tesla P100
- 170 TFLOPS FP16 (CPU + GPU)
- AlexNet train time: 2 hours / 1 Node
The products will be launched throughout the second half of the year, with general availability in Q1 2017. This means the high-end versions of the NVIDIA Pascal architecture, as well as AMD’s high-end Vega architecture, are all 2017 products.
10:25 AM PDT: “Three years ago, we decided to go ‘all in’ on AI, ‘all in’ on HPC. And we decided to design a GPU architecture to accelerate deep learning, to accelerate AI, and to become the perfect datacenter GPU.”
The company spent up to 3 billion dollars and put several thousand engineers to work to create the first Pascal-based product: the Tesla P100.
- 150 billion transistors on the card (including the HBM2 memory stacks)
- 10.6 TFLOPS FP32 Single-Precision
- 5.3 TFLOPS FP64 Double-Precision
- 21.2 TFLOPS FP16
- 80 TB/s internal bandwidth
- 14MB register file (SM RF)
- 4MB L2 Cache
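The quoted throughput figures follow Pascal’s simple precision ladder: each halving of precision doubles the peak rate. A quick sanity check in Python:

```python
# Tesla P100 peak throughput figures quoted above, in TFLOPS.
fp64 = 5.3   # double precision
fp32 = 10.6  # single precision
fp16 = 21.2  # half precision

# On Pascal, each precision step doubles peak throughput.
assert fp32 == 2 * fp64
assert fp16 == 2 * fp32
print(fp16 / fp64)  # 4.0
```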
10:18 AM PDT: Mike Houston, one of Nvidia’s AI research leads, demonstrates Facebook AI Research work developed in collaboration with Nvidia. The demo trains a deep neural network (DNN) on 20,000 input pictures overnight. After a night of learning, selecting “landscape” makes the AI render a landscape, “landscape” + “forest” renders a forested landscape, and “sunset” – “cloud” + “beach” renders an image showing a sunset on a beach.
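The “sunset – cloud + beach” step is arithmetic on learned embedding vectors. Here is a minimal NumPy sketch of the idea, using randomly initialized stand-in vectors; the real model’s embeddings and dimensionality are not public, so everything below is illustrative:

```python
import numpy as np

# Stand-in concept embeddings; a trained DNN would supply real vectors.
rng = np.random.default_rng(42)
dim = 128
concepts = {name: rng.normal(size=dim)
            for name in ("sunset", "cloud", "beach", "landscape", "forest")}

def combine(plus, minus=()):
    """Add and subtract concept vectors, then normalize the result."""
    v = np.zeros(dim)
    for name in plus:
        v += concepts[name]
    for name in minus:
        v -= concepts[name]
    return v / np.linalg.norm(v)

# "sunset - cloud + beach": the vector handed to the image generator.
query = combine(plus=("sunset", "beach"), minus=("cloud",))
```

In the demo, a generator network would then decode a vector like `query` back into pixels, which is why subtracting “cloud” visibly removes clouds from the rendered scene.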
10:16 AM PDT: Talking about the M40 and M4, processors dedicated to deep learning. This is also how Nvidia wants to replace the need for FPGA silicon: with deep learning algorithms running on GPUs, i.e. AI GPUs. Should we start talking about Intelligence Processing Units, or IPUs?
More on the Tesla M40 high-power deep learning processor.
More on the Tesla M4 low-power deep learning processor.
10:09 AM PDT: There are over 1,000 AI startups that received over $5 billion in funding during 2015. This is the most important field worth investing in.
“Cloud will be powered by AI.”
10:05 AM PDT: “Using one algorithm, we’re now able to target problems, one problem at a time. Using the new approach, you have a new algorithm called deep learning, you need a huge amount of compute power, and you get the solution done. These are computer vision experts that dedicated their lives to seeing things using computer algorithms.”
10:03 AM PDT: “Deep learning is no longer just a field. I believe deep learning is way, way bigger than that. That is the reason why we went big and long on deep learning. We think this is utterly going to change computing. Deep learning is a big deal and a brand new computing model.”
09:58AM PDT: Switching gears, the keynote session now focuses on Artificial Intelligence (AI). “2015 was a landmark year for AI. For the first time, AI was able to recognize images better than the human eye.”
Researchers at UC Berkeley were able to teach a robot how to control its movements using 8,000 images.
09:54AM PDT: Introducing IRAY VR Lite.
09:53 AM PDT: Jen-Hsun shows the new corporate headquarters courtesy of IRAY VR, with 100 light probes rendering the scene in photorealistic fashion. The building is located across from the current headquarters in Santa Clara and is planned to host 2,000-3,000 NVIDIA employees in Phase I.
09:47 AM PDT: The Mars 2030 demo was run on two GeForce GTX Titan X cards. However, according to Jen-Hsun, that is not enough. There has to be more compute performance available to create a realistic scene.
With that said, the company is introducing IRAY VR. Available in June 2016, IRAY VR creates 100 ‘light probes,’ i.e. ‘light fields,’ rendered in 4K resolution. Ray tracing then renders pixels from the viewpoint of each eye, immersing the user in a photorealistic environment.
It takes one hour on an 8-GPU system to create a photorealistic room. A Quadro M6000 handles the rasterization and composition for the VR headset.
09:42 AM PDT: Steve Wozniak talks to Jen-Hsun over a video link to discuss the Mars 2030 VR experience. Steve is the first person to try the Mars 2030 virtual reality experience, and picks up the controller upside down before getting immersed in the experience.
SW: “I’ve gotten dizzy, I am going to fall out of my chair.”
JHH: “Thanks Woz, this was not a helpful comment.”
09:31 AM PDT: VR is a new computing platform, not just a new technology. Jen-Hsun discusses advantages of VR such as “creating cars in VR.”
“VR can take you to places that you could only dream of. Places that are out of our ability to reach. One of the things I am super excited about is how VR is going to transform communications.”
“HoloLens is truly groundbreaking technology.”
“Science becomes a reality.”
Introducing the Everest VR demo: 10 million polygons built from over 14,000 images, totaling 108 million pixels.
09:29 AM PDT: Introducing GIE, the GPU Inference Engine (available in May). Jetson TX1 goes from 4 images/second/Watt to 24 images/second/Watt for deep learning.
That’s 240 images per second from which the system can infer information.
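The arithmetic behind the 240 figure works out if the TX1 module draws about 10 W; the wattage is my assumption and was not stated on stage. A quick check:

```python
# Quoted efficiency figures for Jetson TX1 deep learning inference.
before_gie = 4    # images/sec/Watt without the GPU Inference Engine
with_gie = 24     # images/sec/Watt with GIE
power_watts = 10  # assumed TX1 module power budget (not from the keynote)

throughput = with_gie * power_watts
print(throughput)              # 240 images/sec, matching the figure above
print(with_gie // before_gie)  # 6x efficiency improvement
```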
09:28 AM PDT: DriveWorks will be released in Q1 2017 and is available today for select partners. Nvidia is working on expanding the ecosystem.
09:20 AM PDT: NVIDIA libraries: ComputeWorks, GameWorks, VRWorks, DesignWorks, DriveWorks, JetPack.
ComputeWorks consists of CUDA 8 (available in June), cuDNN 5 (April), and nvGRAPH (June).
IndeX, billed as the world’s largest-scale visualization tool, gets a ParaView plugin (May).
09:18 AM PDT: The NVIDIA SDK is available today from developer.nvidia.com
09:07 AM PDT: The Keynote Hall of McEnery Conference Center is packed to the gills.
08:55 AM PDT: We’re live for the 2016 edition of the GPU Technology Conference, the annual event organized by NVIDIA. We’ll be posting live updates as the keynote session begins. You can follow the livestream here: