
Nvidia Pascal Based Titan 50% Faster than GeForce GTX 1080?

Nvidia GeForce GTX 1080 in 2-Way SLI

While Nvidia will pull out all the guns to fight the AMD Radeon RX 480, releasing the GeForce GTX 1060 as early as July 7th, our focus is slowly turning towards the real big gun of the Pascal-based GeForce line-up. If our sources are correct, GP100 and GP102 are essentially the same chip, the difference being the memory and bus interface on GP102 versus PCI Express on GP100. The feature set on both chips is the same, and there are no surprises.

We held the “Tesla P100 for PCI Express-based Servers” board in our hands just a few weeks ago, and just a few days ago we managed to get our hands on a GP100/GP102-based GeForce GTX Titan. As we reported earlier, this board will only come to market after the debut of Quadro-branded products. Both Quadro and GeForce cards come with the same heatsink, albeit in different color schemes, with “QUADRO” vs. “TITAN” markings on the aluminum-machined shroud.

Display configuration is similar to the GeForce GTX 1080/1070, with one small change: our board did not have any DVI connectors.

The boards have 8+8-pin and 8+6-pin configurations, with the power connectors placed at the front rather than on top (as is the case with the GeForce GTX 1070/1080). Bear in mind the PCB has routing for both, but the samples we played with all had front-placed power connectors. If the company ends up using 8+6-pin, you can count on a 300W TDP, while the 8+8-pin configuration would give you 375W to play with. The picture below shows the position of the power connectors on a Tesla PCB, and as you can see, it’s an 8+6-pin configuration.
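As a sanity check on those TDP figures: the standard PCI Express limits (75W from the x16 slot, 75W per 6-pin connector, 150W per 8-pin connector) add up exactly to the numbers quoted above. A minimal sketch of that arithmetic:

```python
# Rough PCIe power-budget arithmetic. The per-connector limits are the
# standard PCIe/PEG spec values; the board TDPs derived from them are the
# article's estimates, not measurements of an actual card.
PCIE_SLOT_W = 75              # delivered through the x16 slot itself
PIN_W = {'6': 75, '8': 150}   # auxiliary PEG connectors

def board_budget(connectors):
    """Slot power plus each auxiliary connector ('6' or '8')."""
    return PCIE_SLOT_W + sum(PIN_W[c] for c in connectors)

print(board_budget(['8', '6']))  # 300 -> the 300W TDP case
print(board_budget(['8', '8']))  # 375 -> the 375W TDP case
```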

Bear in mind that the PCB is really long – we’re seeing the return of a 30cm / 12″ board here.

8+8+6-pin power connectors remain an option for GP100-based GeForce GTX Titan

What matters is the performance. At the time of writing, we can report that Tesla and Quadro clocks are finalized, but the engineering team is still trying to extract the last bit of performance from the cards. Expect the chip packaging to remain the same as on Tesla and Quadro cards. We should thus see two versions of Titan boards: a 12GB model with GDDR5X memory, and a 16GB big daddy with HBM2 memory clocked higher than was the case with the Tesla family (Tesla boards are geared towards HPC use – industrial design, ECC-enabled memory, 60-to-72 month life cycle).

The target is at least 50% higher performance than the GeForce GTX 1080 Founders Edition, and our sources are saying they’re now bound by the CPU. Even a Core i7-6950X isn’t enough to feed all the cards, and in a lot of scenarios an Intel Core i7-6700K, with its higher clock (4.0 vs. 3.0 GHz), can feed the GP100 more efficiently than the Broadwell-E based Core i7 Extreme Edition. The running joke inside Nvidia is “don’t buy the 6950X – buy a 6700K and a Titan,” but we’re not sure Nvidia will use this as an official tagline. Truth be told, they might be right – we need Intel to return to offering a 4-core next-gen mainstream/enthusiast part and an X-core big-daddy part on the same architecture, rather than the current cadence, which makes sense only if you work for Intel. AMD’s 8-core ZEN cannot come soon enough.

If all things go well, Nvidia should unveil the GeForce GTX Titan X and P at Gamescom in Cologne, Germany – August 17-21, 2016.

  • Socius

    8 cores is better than 4 cores. I just switched from my 3770k at 5.2GHz to a 5960x at 4.75GHz and my performance in all games went up. DX12 games like Forza Apex, where fps would sometimes dip down to 50 with maxed-out settings at 1440p on the night-time race track, now maintain 100fps (GSYNC limit of 100Hz).

    The only problem with high core counts is the lower base clocks they come with. But additional cores do help. So as long as you’re able to get a semi-reasonable overclock, you’ll be much better off with a 6, 8, or 10 core CPU than a quad core.

    I believed a high clocked quad core like my 3770k at 5.2GHz would be the best thing for gaming. But I built a computer with a 6 core 5820k clocked at 4.6GHz and noticed oddly enough that it was outperforming my 3770k. So I finally decided to upgrade to the 5960x. Not the 6950x because it’s a terrible overclocker.

    • Piiilabyte III

      I’ve heard that the 6950x can be OCed to 4.5+ GHz. The article is probably talking about non-OCed versions of the 6950x. I have a Core i5 4690K OCed to 4.6GHz and the CPU doesn’t even bottleneck with the GTX 1080s

      • Socius

        All review samples topped out at 4.2-4.3GHz for the 6950x. One guy I saw got to 4.4 or 4.5GHz, but that’s a one-off from a hardcore overclocker who probably got a binned sample. I doubt your average 6950x will be able to beat your average 5960x at gaming when both are overclocked. I have a pretty bad 5960x, but even it could be overclocked to 4.75GHz. And I haven’t seen any games that can take advantage of a 9th and 10th core enough to compensate for the loss in clock speed.

        • φnux

          Just read your posts and I think you’re right. It’s actually a continuation of the multi-processing (parallelism) trend that started in the mid-2000s with dual-core CPUs. Desktop plateaued at 4 cores since then, while server CPUs broke that barrier wildly from 2012 onwards (going from a max of 6 physical cores pre-Sandy Bridge to 24 physical cores per CPU on today’s Xeon E7s). Mobile (ARM) skipped a few steps and, in the name of efficiency (battery…) and user experience (multi-tasking, critical real-time ops like calls), went multi-core from the get-go (currently 8-core Qualcomms etc., even async clocks between cores…)

          As of 2016 there are a bunch of AAA titles leveraging 4 cores. We mean physical cores here, because as everyone knows, HT usually isn’t that great for gaming. The reality is that you rarely do *just* gaming; often there’s a browser in the background, some Skype/comm app, the OS itself, and what have you. All of that is quite happy with 1-2 dedicated physical cores, leaving the main app free to run its intensive 4-core workload. That’s 5-6 cores minimum (HT or not), and I truly think the next cycle of desktop (with Zen) will be just that: replacing 4-core i7s (6700K etc.) with 6-8 cores in the $250-350 range.

        • Piiilabyte III

          Both the 6950x and 5960x fall prey to the 6700K and even the 4790K when we’re talking about gaming. Single-core performance is the indicator of a good gaming processor. You can have 1,000 cores, but if only one or two are being used at a time, you’re wasting the rest of the cores.

          Yes, both the 6950x and 5960x are absolute monsters aimed at the professional world, because 8 cores and 10 cores IS a monster. In games, though, they both fall prey to the 6700K (I couldn’t test the 4790K because I would consider it a waste of money).

          Just the same way that buying a 10-core or 8-core purely for gaming would be an absolute waste of money (unless you’re doing other things besides gaming which would make use of those cores).

          And yes, the 6700K goes above 5.2GHz with the proper 3X fans for the radiator.

          • Brent Feinberg

            That is not the case for DirectX 12 games though. They can use up to 6 cores.

          • Piiilabyte III

            I just read about this and you’re absolutely right. DX12 performance tests from various credible sources state that DX12 can use up to 10 cores (Intel), but the result is almost identical to using a 6 core processor with higher clock speeds.

            I want your suggestion for a 6-core processor to buy. That whole DX12 thing killed my decision to buy a 6700K because it obviously won’t be future-proofed. I have my eyes on the Skylake i7 6850K, which is 6 cores – do you know of any better Intel 6-core processors?

          • Brent Feinberg

            I think you still have some time. The most important thing for now is still being able to overclock your 4 core as much as possible. DirectX 12 will become more and more popular over the next year or two, but if you are in the market for a processor now, then the one you suggest is probably what I would buy as well (note that it is a Broadwell-E chip, as the E chips are always one generation behind). I might even consider the 6800K, since running SLI won’t be necessary anymore once the next Titan / Ti card comes out (but if you are going to be running SLI, then you definitely want the 6850K for the extra PCIe lanes). Whatever you get, make sure you can overclock it to at least 4.5GHz.

          • Brent Feinberg

            One other note. I started playing The Division a couple of days ago with my wife (yes, we are late to the party), and that game absolutely eats my 4 core CPU trying to feed two Titan X’s in SLI with all settings maxed out at 3K widescreen resolution. I mean all cores on the CPU were hitting 80% utilization, and I even got an overheat warning at 81 degrees from my monitoring program (not a big deal as the chip can go to 90 before throttling, but I was impressed). I think more and more games are actually using all 4 cores since the consoles support that many as well now (actually more). That is why I am actually considering an upgrade as well.

          • Piiilabyte III

            That’s very informative. I’m trying to future-proof my system (to an extent) and more and more games are using DX12. I think the choice for me (as I’m building a new system as opposed to an upgrade) would be the 6850K to get the extra PCIe lanes, 128GB of RAM (I know, extreme overkill, but I will be using it as a server as well), and last but not least quad-channel bandwidth. This will also give me my favorite motherboard of all time, the ASUS Rampage V Edition 10.

          • Socius

            I’d like to see benchmarks that show an overclocked 4790k or 6700k beating an overclocked 5960x. You can’t compare stock clocks, because while the 4790k and 6700k are clocked much higher out of the box than the 5960x, after overclocking the gap closes substantially, and the 4 core parts have a 5-10% clock advantage at best over the 8 core chips. For example, a 6700k at 5.2GHz has less than a 10% clock-speed advantage over my 5960x. But my 5960x has a 100% advantage in core count. So even with the slightest bit of multi-threaded support, the 5960x will pull ahead.
            As I stated earlier… every single game I’ve tested so far has resulted in a higher frame rate with my 4.75GHz 5960x compared to my 5.2GHz 3770k.
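The trade-off in the comment above (a roughly 10% clock deficit against a 100% core-count advantage) can be sketched with Amdahl’s-law-style arithmetic. This is a simplification that assumes the parallel fraction of the workload scales perfectly across cores, and the fractions used are illustrative assumptions, not measurements:

```python
# Amdahl's-law-style sketch of clock vs. cores: a 4-core chip at 5.2GHz
# against an 8-core chip at 4.75GHz. "p" is the fraction of the workload
# that scales across cores -- an illustrative assumption, not a measurement.
def throughput(cores, ghz, p):
    # Serial part runs on one core; parallel part is split across all cores.
    time_per_unit = (1 - p) / ghz + p / (ghz * cores)
    return 1 / time_per_unit

for p in (0.0, 0.3, 0.6, 0.9):
    ratio = throughput(8, 4.75, p) / throughput(4, 5.2, p)
    print(f"parallel fraction {p:.1f}: 8-core is {ratio:.2f}x the 4-core")
```

With no multi-threaded support the higher-clocked quad core wins; somewhere past a ~50% parallel fraction the 8-core pulls ahead, which is the crossover both commenters are arguing about.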

          • Piiilabyte III

            Games will use at most 6 cores in DX12. And very few games even support DX12 to its fullest. I’m sorry, but I don’t see the 6700K OCed to 5.2GHz being beaten by an 8 core processor OCed to 4.75GHz.

            The good news for you is that your 5960x is a gosh darn monster and will be future-proofed for years to come. Besides, it’s the perfect breed for both servers and gaming at the same time. I’m a cheap guy so I’ll buy the latest generation 6 core proc instead.

          • Socius

            Lucky for you, the DX12 SLI patch for Tomb Raider came out last night, so I got to test it out. In order to maintain 165fps, it was averaging about 90% usage across all 8 cores. Like I said… I’ve read all the anecdotal evidence. But real results speak for themselves. Let me know if there are any other games you’d like me to test for CPU usage.

          • Piiilabyte III

            See, the norm was that single core strength was better than having more cores. Then DX12 came out, using 6 cores at max. And I absolutely believe you about the 90% usage across 8 cores. This makes me think twice about the information I read… because reality, as you pointed out, shows 8 cores being used. Hmmm…

    • Steven Davidson

      You’re probably seeing performance increases due to generational IPC gains more than going from 4 to 8 cores bro.

      • Socius

        Generational gains for Intel top out at 5-15% at the same clocks. The 5820k is also just one generation ahead of the 3770k. And the 5.2GHz on the 3770k, as opposed to the 4.625GHz on the 5820k, more than makes up for any IPC improvements alone.

        • Jollyriffic

          What are your settings on the 3770k for 5.2GHz?
          I’ve delidded (IHS still on right now, waiting on the EK naked Ivy add-on bolts), but right now I’m doing 4.7GHz till it comes.

          • Socius

            I only use 5.2GHz for benching, as it requires 1.5v. My 24/7 clock is 5GHz at 1.38v. It’s a binned chip that I bought off someone. My second binned chip, rather: my first 3770k could only do 4.8GHz. The second one I bought was supposed to do 5GHz but couldn’t; I had to run it at 4.93GHz with 1.47v. Then I got this one. Delidded as well (IHS just resting on the die, not glued back down), of course, using CLU. 5GHz at 1.38v means temps under max load are still under 60c.
            Nothing fancy in terms of the OC. Just using offset voltage; everything else is standard, as you’d see with any other OC. A lot of the OC potential, beyond the cooling setup, is down to the luck of the draw with your own chip.

    • Joshua

      And what kind of memory do you have and what speed are you running your memory at?

      • Socius

        Had dual channel 2400MHz at CL9 on my 3770k. Quad channel 3000MHz CL15 on my 5820k, and quad channel 2720MHz (2800 XMP, but lower due to BCLK adjustments) CL15 on my 5960x. Also had to drop from 64GB down to 32GB with the 5960x – it couldn’t handle 4.75GHz with that much memory.

        While from a purely bandwidth perspective quad channel 3000MHz at CL15 is better, 2400MHz CL9 is pretty much the most you really need for gaming purposes. I’ve seen exactly 0 benchmarks showing any memory beyond that being responsible for higher fps. Which is also why I’m not too concerned about switching out the 2720MHz RAM in my 5960x for the 3000MHz kit in my 5820k – it won’t actually make a difference in gaming performance.
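For reference, the theoretical peak bandwidth of the configurations being compared works out to channels × effective transfer rate × 8 bytes per 64-bit channel. A minimal sketch (real-world bandwidth is always somewhat lower than this peak):

```python
# Theoretical peak DRAM bandwidth: channels * effective transfer rate (MT/s)
# * 8 bytes per 64-bit channel. The rates match the kits named in the
# comment above; achievable bandwidth is lower than this theoretical peak.
def peak_bw_gbs(channels, mts):
    return channels * mts * 8 / 1000  # GB/s

print(peak_bw_gbs(2, 2400))  # dual-channel 2400 MT/s -> 38.4 GB/s
print(peak_bw_gbs(4, 3000))  # quad-channel 3000 MT/s -> 96.0 GB/s
print(peak_bw_gbs(4, 2720))  # quad-channel 2720 MT/s -> 87.04 GB/s
```

The quad-channel kits have well over double the peak bandwidth on paper, which is why the comment hedges that latency and "enough" bandwidth matter more for gaming than the raw number.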

      • Socius

        I lied. Your comment had me curious, so I swapped the memory between my computers. Running the 64GB of 2800MHz RAM at 2500MHz in my 5820k system (it won’t detect all the memory above 2500MHz), and put the 32GB of 3050MHz DDR4 CL15 with 1T timing in my gaming rig. Boots fine at 1.4v on the memory (stock voltage is 1.35v on this kit). Figure 11% more memory bandwidth is still 11% more memory bandwidth. Especially since both sets were CL15, so I get lower latency as well by going to 3050MHz.

    • vasras

      What cooling are you running to get 5960x 4.75GHz load stable?

      Thanks!

      • Socius

        It’s not a very good chip. For 24/7 usage I have it at 4.7GHz with HT off at 1.36v; with HT on I need 1.41v to be stable. Running a dual-pump, triple-radiator setup for cooling. But I’m also cooling 3 Titan Xs at 1.5GHz GPU/8000MHz memory at the same time, which can be hard to handle when both are running at max load. So I have a portable AC unit in my computer room to keep ambient levels stable. Haha.

    • Michael Norris

      Depends. A game like GTAV will run better on a 6700k, even stock, than any of those CPUs you listed. That is only one game, but most games only use 4 cores; my 6700k hardly breaks 34% running GTA or about any other game I’ve played. For gaming, a high OC will give you the best gains.

      • Socius

        Except… that’s not the case at all. I am getting substantially better performance in GTA5 with my 5960x at 4.7GHz than I did on my 3770k at 5.2GHz. Take a look at this screenshot – see the heavy CPU usage across all 8 cores? It would be higher, but I have FPS capped at 160 (5 below my GSYNC limit). It is actually able to feed all 3 Titan Xs at 1.5GHz… This is with max viewing distance/population density settings.

        I used to think like you. And I used to be right. But over the last 2 years, more and more games are providing better multi-threaded support. I used to run my 3770k with HyperThreading disabled in the past. Then I noticed that newer games actually get better performance with HyperThreading enabled. That’s what led me to believe that it’s time for an upgrade, because 8 real cores > 4 real cores + 4 virtual cores. And boy is it ever amazing. Playing Crysis 3 and GTA locked at 160fps at 1440p. Could never even dream of doing that on my old CPU.

        https://uploads.disquscdn.com/images/18943b19965418dbe760eb7640a2b33faa1fae664cdc7eeaed12f086b1fb8797.jpg

        • Javed Alam

          Then why do the benchmarks show such minor improvement between the 4770K and 5960X? – http://pclab.pl/zdjecia/artykuly/chaostheory/2015/04/gtav/charts5/gtav_vhigh_cpu.png

          EDIT: I just noticed in this other CPU benchmark that at higher resolution, (2560X1600), the 5960X at 3.00Ghz is better than the 4770K at 3.5Ghz, so it must only be at higher resolutions that the benefits of multiple threads become apparent. (http://www.techspot.com/review/991-gta-5-pc-benchmarks/page6.html)

          • Socius

            Because if your GPU is at 100% usage, it doesn’t matter if your CPU is at 5%, 50%, or 90% usage. You are limited by the GPU. If you have a ton of GPU power, like with my setup above, then you can really test CPU performance. You’re comparing a benchmark using a GTX 970, with my 3 voltmodded Titan Xs, running at 1.5GHz GPU and 8GHz memory.

          • Javed Alam

            Thanks, that makes sense, I hadn’t even thought of that. It’s rare that someone on a discussion board points something out to me that I was unaware of, so props.

    • randomguy48

      You are surprised by a performance increase from a 2-year-newer card that had triple the release price?

      • Socius

        This conversation is about CPUs, not video cards.

        • randomguy48

          I know it is. I wrote the wrong word while multitasking at work and looking into any recent articles on Volta. My comment still stands. Would you like me to edit it for you?

          • Socius

            Yes. Please edit it, so I can explain to you that the 6-core 5820k was priced less than $100 above the quad core i7 that was out at the time, and that the surprise is from seeing games finally start to take advantage of additional CPU cores. Previously, fewer higher-performing cores in CPUs like overclocked i5’s were beating out 6 and 8 core i7’s, to the point where even people with quad core i7’s would often disable hyperthreading to get higher FPS.

          • randomguy48

            “I just switched my 3770k at 5.2GHz to a 5960x at 4.75GHz and my performance in all games went up.”
            I am referring to your comment on the 5960x, not the 5820k.

            My comment was about the fact that you were trying to compare a $330-350 CPU from 2012 to a 2014 chip around $1k.

            Now if you want to compare the 5820k to the 4790k, which came out only a few months earlier, then yes, there is only maybe a $70ish price difference, not counting the extra cost of a DDR4 motherboard. They perform almost identically in gaming. The new Titans may or may not change that.

          • Socius


            Being blind doesn’t help. From the same post that you just quoted:

            “I believed a high clocked quad core like my 3770k at 5.2GHz would be the best thing for gaming. But I built a computer with a 6 core 5820k clocked at 4.6GHz and noticed oddly enough that it was outperforming my 3770k. So I finally decided to upgrade to the 5960x.”

            That comment thread goes on to where I also say this:

            “Generational gains for intel top out at between 5-15% based on same clocks. 5820k is also just one generation ahead of the 3770k. And the 5.2GHz on the 3770k as opposed to the 4.625GHz on the 5820k more than makes up for any ipc improvements alone.”
            The 5960x is also just one generation ahead of the 3770k. So the same applies. And the 4790k would actually outperform the 5820k at stock clocks, but not when both are overclocked.
            Stop trying to be a smarty pants. You look foolish.

          • randomguy48

            That doesn’t take away from the comment you posted, which is what I commented on. Yes, it may outperform, but most consider that not enough to justify the price difference. You can keep pretending you didn’t say that, but you did.

          • Socius

            I said exactly what I said. That even lower clocked 6-core processors can beat out higher clocked 4-core processors in many new modern games as they are increasingly taking advantage of the additional cores. You, on the other hand, are poor. And an idiot.

  • matt

    This would make the new Titan P roughly 90% faster than the Titan X… That is quite the generational leap

    • Kenrick Brown

      1 x 1.3 x 1.5 = 1.95

      Almost 2X the speed of the Titan X. Holy smokes.
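The arithmetic in the comment above is just compounding two relative speedups: roughly 1.3x from Titan X to GTX 1080, and the targeted 1.5x from GTX 1080 to the rumored Titan. Both factors are rumored/estimated figures, not benchmarks:

```python
# Compounding relative speedups: multiply the factors together. Both inputs
# here are rumored/estimated figures from the article, not benchmark results.
def compound(*speedups):
    total = 1.0
    for s in speedups:
        total *= s
    return total

# ~1.3x (Titan X -> GTX 1080) times ~1.5x (GTX 1080 -> rumored Titan)
print(round(compound(1.3, 1.5), 2))  # 1.95, i.e. "almost 2X" a Titan X
```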

  • Yuki_Sakuma

    My lack of knowledge is giving me a hard time understanding how a 6950x could be a bottleneck for a Titan Pascal while a 6700k is not.

    • Kite

      The 6950x can be overclocked as well, just not as high as the 6700k. If you’re using this card at 4k resolution you won’t hit a CPU bottleneck in modern AAA games with either CPU, even at stock clocks. Someone with a high refresh rate monitor at lower resolutions will hit a CPU bottleneck in some DX11 games, and there the 6700k is likely to come out on top with its higher overclock.

    • φnux

      Kite is probably correct; you also have to consider that the Nvidia people may not have been talking just about games. Everyone knows that a 6700K is essentially better at gaming in most cases than a 6950X, so the fact they made this remark probably hints at the domain where a 10-core $1800 6950X should shine: raw compute power, like rendering or HPC, folding – the things that bring CPUs to their knees and are parallel enough to 1. leverage many-core CPUs and 2. be applicable to GPGPU, i.e. run on a GPU instead of a CPU with much benefit.

      And *that*, I think, is where someone at Nvidia did the math and saw that for the same budget, you’re worse off with a combination of a 6950X ($1800) + some GPU ($??) than with a 6700K ($350) + Titan P ($??). Remember, in IT it’s all about money, TCO, ROI etc. – how much you can save for how much you pay, how long things take to “pay for themselves” – so there’s no point comparing stuff any other way when you’re in the business mind of a GPU manufacturer.

      The horrible realization for consumers like us, if this is true, is that it places a minimum price tag of 1800-350=$1450 on a Titan P (assuming the comparison was a 6950X *alone* versus 6700K+Titan P, and honestly that’s not even fair conceptually, it’s a dumb comparison), so yeah, we’re probably looking at a $1500-$2000-ish Titan P. Like twice the old tag, and for the probable reason that it’ll be a hybrid pro/mainstream card with some pro features unlocked to some degree. Hopefully enough to make it an interesting proposition for prosumers looking for a more versatile card than a Quadro, but who currently can’t use a GeForce because it’s all locked down at the driver level.

      On a related side note, all this price gouging around Broadwell-E and Pascal – +$100 across the whole product lines compared to last gen, and flagships now close to $2K, twice the previous threshold – is just blatantly stupid. I’d understand over a decade, but jacking prices up like that in all of two years? Did half the chip factories in the world suddenly explode and no one told us?…

    • Derreck

      In single-threaded tasks the 6700k is better than the 6950X because it’s based on a newer arch (Skylake vs Broadwell). And most games don’t use >4 cores, so the 6950X is not a very good choice for gaming.

      • Yuki_Sakuma

        But what if you OCed the 6950x to 4GHz – would it still be a bottleneck?

        • Derreck

          Well, it probably won’t be a bottleneck, but it would definitely be slower than an OC 6700k @ 4.7GHz, for example.

    • Piiilabyte III

      DX12 makes games use a max of 6 cores. The 6850K Broadwell-E processor for Intel would be the best investment IMO.

  • alex

    What stupidity are they talking about?? The 6950x can be OCed to the same level as the 6700k, making it far better even if each core is slightly less powerful. Not just that, modern games don’t use as much CPU as some older games did, e.g. Crysis 3.

    • lozandier

      You can’t compare existing (DX11) games with what DX12 & Vulkan will enable; leveraging multiple GPUs *and* CPU cores pretty much required hacks before such modern gaming APIs.

  • Hooligan1976

    More must be cut on the GP102 or the chip will be huge. Will it really retain all the FP64 cores? That is around 33% of the die size by itself.
    FP64 isn’t used in gaming, and a chip close to 600mm2 would be very expensive.

    How valid is this rumour/speculation? Would it really come out in August? That is quick.

    Perhaps this time there won’t be a “Ti” model and just two Titan models with 12 or 16GB and one slightly cut down?

  • hoohoo

    I hope this is a ‘compute’ Titan card and not badly crippled on the double precision side.

    • Piiilabyte III

      Nooooooo…. I want it to be a gaming flagship card with the FP64 cores turned into FP32.

      • hoohoo

        That could be the 1080Ti, maybe we both get what we want?

        • Piiilabyte III

          I hope so. I hope the 1080 Ti at least matches the 12GB – GDDR5X or HBM2, I really don’t care. I heard it was based on GP102, whereas the Titan is obviously GP100.

  • I’m still gonna run Titan X SLI till the Volta Titan arrives… I mean it’s not that necessary or big of an upgrade unless it’s a pure new investment in a brand new PC.

  • WhiteSkyMage

    I bought my i7 5820K in December 2014. I have it OCed to 4.0GHz; it can also handle 4.2GHz, but it runs hot, so I decided to keep it at 4.0. Now I am quite scared about this, even if it is a rumor… It seems like GPUs will be pushing CPUs, and it would be, for both, Moore’s law all over again – not bad, but a lot of money to spend…

    • WhiteWolfKobachi

      Ok, quick question. I have an i7 4790k OCed to 4.7GHz, you have yours at 4.0. Which is faster?

      • Jollyriffic

        his, by a large margin.

        • WhiteWolfKobachi

          In real time or just on paper?

          • Jollyriffic

            Depends on your task. Something that doesn’t use much CPU isn’t going to show any effect, or one so small you’ll never see it.
            If you’re doing something heavy, like gaming and streaming at the same time with a medium preset, his will obliterate yours. You’ll be around 75% usage while he’ll be at 50 or less.
            Think of it like this:
            you’ve got 4 cores @ 4.7 [ 4 x 4.7 = 18.8 ]
            he’s got 6 cores @ 4.0 [ 6 x 4 = 24 ]
            2 additional cores and 4 additional threads is a large difference in processing power.

            I was looking at a dedicated streaming CPU and was going to go with the 6 or 8 core Haswell-E (forget which now), but in short it had 2x the video encoding speed of the current 6700k. So if I was only able to do the medium preset for live streaming on a 6700k, with the Haswell-E I could run a 2x slower preset and use the same CPU % (in theory, as I can’t afford that CPU).

          • WhiteWolfKobachi

            Thanks for this, appreciate it.

          • Karlo Timbal

            I think

          • Kitty

            I don’t think that’s the way you compute speed; cores × GHz != total speed.

      • Piiilabyte III

        This is like comparing apples and oranges. One has a higher clock speed across 4 cores (per-core strength), the other has a lower clock speed but more cores, 6.

        DX12 uses up to 6 cores, so if a game implements DX12 to that extent, then his processor would beat yours. In other cases, for gaming, yours is faster because of the per-core strength

        • WhiteWolfKobachi

          So why not get the 4790k? It’s cheaper and you barely see any gaming differences. It’s not like you will see a bottleneck anytime soon.

          • Piiilabyte III

            Well, sources are currently saying that they’re bottlenecking at the CPU level. The 4790K doesn’t have DDR4, NVMe, etc. Also, like I mentioned, in order to future-proof a system you would aim at DX12 gaming and buy a 6850K processor (that way you could use U.2, because that CPU has 40 lanes and is a Broadwell-E).

            Take a look at my list: https://amzn.com/w/3FUN6EMRUYDTN

    • Yuki_Sakuma

      I have the same CPU (same OC as well) and GPU as yours, and I am also wondering if it would be a bottleneck for the Titan P/1080 Ti.

  • Orim

    FP64 performance on the GeForce GTX Titan P? 1/2 rate? Any guesses?

  • Jon Jones.

    More clickbait bullshit.

    • In that case, thank you for your click. On a more serious note, Siggraph is coming up and the Quadro PX000 series is almost ready for launch. Titan will follow sooner than most people think. BTW, the only reason I saw the card was my doubt about its existence.

      • Gary

        Thanks for the early heads up 🙂 . Now that the Titan X has been officially announced, do you have any idea when the full-fat version is to be expected? I noticed at the bottom of the article it says, “If all things go well, Nvidia should unveil GeForce GTX Titan X and P on Gamescom in Cologne, Germany – August 17-21, 2016.” Does “X and P” mean both versions are to be announced this month?

  • HAL9K

    Incredible card. If I can’t find a 1080 Classy soon, I’ll wait for this one.
    If the announcement happens as suggested, when do you think we can start seeing deliveries? Xmas?

  • Jurick Pastechi Genaro

    The Titan P isn’t going to be 50% faster than the 1080. Maybe 25%.

    • 3584 cores at 1.45 GHz / 1.55 GHz boost vs. 2560 cores at 1.60 / 1.73? And of course, 320 GB/s of bandwidth vs. at least 730 GB/s? People said it wasn’t possible for the GTX 1080 to be faster than 980 SLI, and what happened… Pascal is a new generation. Look at the GTX 1060 – it’s a 256-bit chip and they didn’t even have to enable the whole chip to beat the RX 480…