
AMD Confirms Sony PlayStation Neo Based on 14nm CPU and Polaris?

During AMD’s 2016 Investor Day, the company disclosed its growth strategy. Many of the focal points are already in motion, or even public. For example, AMD sealed the deal with THATIC, a Chinese government-owned investment fund – a move that will see its x86 processors enter the Chinese enterprise and government sectors in a big way. The second part of the strategy was to finally become a fabless semiconductor company, with all the risks and rewards that move involves. The company spun off its packaging facilities, bringing in more than $664 million in cash – plus all the unannounced grants the company is bidding for.

Still, perhaps one of the biggest future announcements was carefully hidden in the slide below:

AMD Growth Strategy

We wondered what the “new semi-custom business in 2H 2016” statement meant. Our baseline assumption was that it was a placeholder for the Nintendo NX console, which should be announced during Tokyo Game Show 2016. However, according to sources in the know, this placeholder belongs to another, existing customer: Sony.

During a recent interview, Roy Taylor – AMD / RTG’s head executive for Alliances, Content and VR – went on to defend the decision to launch only mainstream and performance products based on the Polaris architecture: “The reason Polaris is a big deal is because I believe we will be able to grow that TAM [total addressable market] significantly.”

With Nvidia launching the GeForce GTX 1070 and 1080 for the enthusiast market, and preparing its third Pascal chip – the GP106 – to fight Polaris 10/11, such an approach might be a costly mistake if Radeon were the only product family in which the Polaris GPU architecture made an appearance. In the same interview, Roy said something interesting: “We’re going on the record right now to say Polaris will expand the TAM. Full stop.”

AMD's Roadmap called for 2016 to be the "Year of Polaris"

At the same time, we managed to learn that Sony ran into a roadblock with its original PlayStation 4 plans. Just like with all the previous consoles (PSX to PSOne, PS2, PS3), the plan was to re-do the silicon with a ‘simple’ die shrink, moving its APU – a CPU and GPU combination – from 28nm to 14nm. While this move was ‘easy’ in the past – you pay for the tape-out and NRE (Non-Recurring Engineering) – neither Microsoft nor Sony was ready to pay the cost of moving from planar transistors (28nm) to a FinFET transistor design (14nm).

This ‘die shrink’ actually requires re-developing the same chip, at a cost measured in excess of a hundred million dollars (est. $120-220 million). With the Sony PlayStation VR retail package being a mess of cables and what appears to be a second video-processing console, in the spring of 2014 Sony pulled the trigger and informed AMD that it would like to adopt AMD’s upcoming 14nm FinFET product line, based on a successor to the low-power Puma (16h) CPU and the Polaris GPU architecture.

Sony PlayStation VR: contents of the retail package

The only mandate the company received was to keep the hardware changes invisible to game developers, but that changed when Polaris 10 delivered a substantial performance improvement over the original hardware. The new 14nm FinFET APU consists of eight x86 ‘Zen Lite’ LP cores at 2.1 GHz (they’re not Jaguar cores, as previously rumored) and a Polaris GPU operating at a 15-20% higher clock than the original PS4.

According to sources in the know, the Polaris GPU in the PlayStation Neo is clocked at 911 MHz, up from 800 MHz in the PS4. The number of units should increase from the current 1152. Apparently, we might see a number higher than 1500, but lower than the 2560 cores physically packed inside the Polaris 10 GPU, i.e. the Radeon R9 400 Series. Still, the number of units is larger than Polaris 11 (Radeon R7 400 Series), and the memory controller is 256 bits wide, with GDDR5 memory running higher than the current 1.38 GHz QDR. Given the recent developments with 20nm GDDR5 modules, we should see a 1.75 GHz QDR, 7 Gbps clock – resulting in 224 GB/s, roughly a 27% boost over the current 176 GB/s.
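For reference, the bandwidth and compute figures above follow from simple arithmetic. A quick sanity-check sketch – note that the 2304-unit configuration used below is one commonly rumored figure inside the range quoted above, not a confirmed spec:

```python
def gddr5_bandwidth_gbps(base_clock_ghz: float, bus_width_bits: int) -> float:
    """GDDR5 transfers 4 bits per pin per base clock (QDR), so the
    effective per-pin rate is 4x the base clock in Gbps."""
    effective_gbps_per_pin = base_clock_ghz * 4
    return effective_gbps_per_pin * bus_width_bits / 8  # bits -> bytes

def gcn_peak_tflops(shader_count: int, clock_mhz: float) -> float:
    """Peak FP32 throughput for GCN: 2 FLOPs (one fused multiply-add)
    per shader per clock."""
    return shader_count * 2 * clock_mhz / 1e6

ps4_bw = gddr5_bandwidth_gbps(1.375, 256)  # current PS4: 176 GB/s
neo_bw = gddr5_bandwidth_gbps(1.75, 256)   # rumored Neo memory: 224 GB/s

ps4_tf = gcn_peak_tflops(1152, 800)        # current PS4: ~1.84 TFLOPS
neo_tf = gcn_peak_tflops(2304, 911)        # hypothetical Neo: ~4.2 TFLOPS
```

The 224 GB/s figure quoted above is exactly 7 Gbps per pin multiplied by the 32-byte (256-bit) bus width.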

Internally known as PlayStation Neo, the console should make its debut at the Tokyo Game Show, with availability coming as soon as Holiday Season 2016 – in time for the PlayStation VR headset.

  • Bob.H

    Makes sense. What would be the name. “PlayStation 4 HDR” ? I also guess FreeSync support.

    • Kyle Pazandak

      I am going with the Playstation 4k. It is a playstation 4, It will display 4k video and hence the K. The name also shows commitment to the Playstation Brand similar to apple with iPhone #(s).

      • Paul17041993

        playstation neo…

    • Psionicinversion

      it will have freesync support but your TV sure wont be able to do it

      • jahramika

        I bet that’s coming too. At CES 2016 AMD had HDMI 2.0 FreeSync on display

        • PuiuCS

          true, but seeing freesync on TVs… i just don’t see it happening. at least not anytime soon.

          • El-Cid

            Since the new PS will use AMD’s new Polaris, which supports FreeSync, it will be simple to include it on Sony TVs. Maybe not on all TVs, but I’m sure Sony will use it as a selling point.

          • PuiuCS

            we’ll see

    • jahramika

      hehe

  • DeathMade

    What kind of article is this when you say it is based on Zen in the headline and in the actual article you say it is not based on Zen?

    • A low-power ‘relative’ of the Zen architecture – Jaguar and Puma are no more. The article has been updated to reflect the change. Apologies for the confusion.

      • Sykobee

        AMD would have kept development of that pretty secret then. Are you sure it’s not another cat core – Cheetah or Tiger – that AMD designed but didn’t put into silicon before?

        The fact the clock gains are very low suggests that the Neo will also be the Slim.

      • Joe_HTH

        Except every single piece of data on the Neo shows that it is indeed an up-clocked Jaguar CPU that is in the PS4.

      • Eddie Battikha

        I don’t understand why on extremetech they say u said 4 zen lite cores when u said 8.

  • H23

    investors day is tomorrow, this is from a slideshow dated february?

    • That’s the Annual Shareholder Meeting. This comes from the Institutional Investor Day, and the presentation is available for the general public to download from ir.amd.com, cryptically named “AMD Investor Presentation”.

      • TheDizz

        Their PPTs and PDFs are some of the best in semicon industry. Really classy.

  • Mr Buster

    gtx 1070 is NOT enthusiast market. $380 for a card that’s not that far behind the 1080 is murder on AMD and its Polaris. AMD may brag about expanding their total user base (due to deals made behind closed doors with Sony and who knows who else), but on the free PC consumer market they’re in serious trouble if the leaks about Polaris’ lackluster performance vs Pascal’s are real. As much as I’m cheering for AMD, unless they’ve got an ace up their sleeve, the gtx 1070 is gonna wreck their whole business plan.

    • Quikmix

      Think about it this way:

      All of those console games are now going to be developed and built around an AMD GPU. That means that any game that goes cross-console (and PC) will have been developed on an AMD GPU architecture. That means that those technologies/capabilities get coded for OVER something like, say, hairworks from NVidia.

      Just some food for thought.

      • Paul17041993

        They already use AMD GPU units, just that the newer console/s will have the 4th revision of said units.
        Pascal is basically a direct clone of GCN’s base design for this reason and many others.

        • jahramika

          No it is not

          • Paul17041993

            Look at the designs yourself: side-by-side, Pascal and GCN are almost completely identical. Maxwell was even pretty close, except they screwed up the cache system by making the clusters too wide (128 instead of 64), which in turn made it drastically under-perform in heavy loads.

          • PuiuCS

            since it doesn’t feature ACEs i would argue that Pascal is not a direct clone.

          • Paul17041993

            I believe pascal has at least a little bit of hardware scheduling, but whether or not it’s true async I haven’t seen yet…

          • PuiuCS

            it only simulates ACEs through software, it doesn’t have true async functionality.

      • Phat Tuna

        What’s to think about? They already use AMD for current consoles, yet multiplat games still have compatibility issues on AMD on PC compared to Nvidia.

        You know… just some food for thought.

        • Quikmix

          but will there be cake? No cake, no deal.

        • astrix_au

          it will be the case when devs use GPUOpen.

        • PuiuCS

          that’s false and you know it. there are no “compatibility” issues.
          as far as i know it’s nvidia that always needs to release “fix” updates.

    • Mageoftheyear

      The GTX 1070 reference card (known as the “Founders Edition”) is launching at $450 – not $380. The MSRP for AIB partner boards is $380, but they probably won’t be sticking to that, and those won’t be available at launch either.

      The R9 490 will compete with the GTX 1070. Not the GTX 1080 – it isn’t designed to. Polaris’ specific objective is to dominate the mainstream market. Vega 10 is what will challenge Nvidia’s 1080 and new Titan.

      • Ringo’s Revenge

        Incorrect – there are 2 Vega chips inbound. The smaller Vega chip will compete with the 1070 and 1080, likely without HBM1/2 for cost reasons, and then the larger one with HBM2 will compete with the 1080 Ti and Titan.
        Vega11 and 10
        4096 cores and 6144 cores

        • Hectoron

          And what memory technology will the 4096 shader Vega use and is it Vega 10 or 11 for 4096?

          • Ringo’s Revenge

            That I do not know. I suspect GDDR5x, but then again maybe they surprise us and use HBM2 on both, though from a cost point of view they would be losing the profit battle upfront.

          • Cain Marko

            supposedly HBM2 will be ready by September, which is why vega was moved to october.

          • Anukul

            vega 10 for 4096, vega 11 is even bigger.

        • Mageoftheyear

          Where have you heard that there are two Vega chips? :/
          Look at the roadmap in the article. AMD have only announced Vega 10.
          Polaris 10 and Polaris 11 will only address the R9 400 series and R7 400 series respectively.

          Furthermore using HBM2 isn’t just a matter of cost, it’s a matter of availability.

        • jesse

          Sounds good, AMD also has a minor advantage choosing the 14nm tech, this is the perfect time for them to catch up.

        • MLSCrow

          To be honest, based on the numbers we had from Fiji, and how much more efficient Polaris is, I believe that the 4096 sp version of Vega will compete probably between the 1070 and 1080, while the 6144 version will compete with the 1080Ti, and when NVidia releases the next Titan, AMD already has planned on a Pro Duo 2, with 2x 8192 Vega chips on one board to take the crown as they have for the past couple years.

          • jahramika

            Well, because honestly, going forward with more die shrinks it will become impossible to make large-die GPUs at a reasonable cost without a ton of defects. It’s much easier to make smaller/multiple dies. It’s not about AMD, it’s about the industry.

          • chrisday85

            No. The 2304-core Polaris is competing with a 980 Ti at 800 MHz according to rumors, and it was just revealed that that is the mobile variant of the core. The desktop card will also be clocked at 1250 MHz.

            A 4096 core Polaris would compete with the gtx 1080 for sure but we don’t know if they are releasing one.

            A 4096 vega will not be between the gtx 1070 and 1080. Given that the cores are being modified to gcn 5, not the gcn 4 which saw the huge increase, this will be a new core architecture and should have the normal increase of at least another 30% generation to generation if it is a normal bump instead of a huge one like Polaris.

            Do the math. You have nearly twice the cores plus 30% of a product that performs at a gtx 980 ti with a clock speed that is 30% slower than what it will be. You basically grab the 980ti, and double it. The gtx 1080 runs as fast as two 980’s in sli, not as fast as two 980’s total. Do you understand the difference in that? Two 980ti’s do not perform twice as fast as one. Not even close.

            But this card will perform twice as fast as a 980ti at absolute minimum.

            But, I left something out on purpose, didn’t I? HBM 2 memory. The 4k memory bandwidth will be broken with the vega and that is where this card needs to break through to compete with the gtx 1080x which still cannot break that and has a memory bottle neck. The vega 4096 will crush the gtx 1080 at 4k and this is why the gtx 1080 successor is coming out next year with hbm itself. They need it for 4k.

            Then amd will have its 6144 core vega. Things are looking good for amd. I’m getting that 6144 core vega card most likely, and finally will have a single card 4k solution.

        • chrisday85

          A 6144 core vega is coming out?!?!?

          Oh man that is awesome. Thank you for this info. I was worried it would just be the 4096 which might actually just beat the 1080 x of course handily but would not beat the 1080 successor.

          I really want to support amd when I finally get a single card 4k solution, I wasn’t sure the vega could handle it, but I’m fairly sure a 6144 core could.

    • Chaos

      Those rumors put Polaris 10 in 380X level of performance… Either those rumors are fake or Polaris GPUs had some form of bottleneck (early drivers, low clocks etc.).

      At 232mm2 for Polaris 10 (Hawaii is 438mm2), and with 14nm FinFET offering twice the density of 28nm, engineers at AMD would need to be mentally challenged to create a GPU that’s slower than the 390X. And all of this is without taking into account the improved uarch and/or increased clock speeds.

      All it needs to do is exceed 390x, AMD price it at 300$ and voila.
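      Chaos’s density argument checks out on a napkin – a minimal sketch, assuming the ~2x density gain from 28nm planar to 14nm FinFET quoted in the comment:

```python
# Die sizes quoted in the comment above (mm^2)
polaris10_14nm = 232
hawaii_28nm = 438  # the chip behind the 390/390X

# Assumed density gain moving from 28nm planar to 14nm FinFET
density_gain = 2.0

# Polaris 10's transistor budget expressed as an equivalent 28nm die area
equivalent_28nm_area = polaris10_14nm * density_gain  # 464 mm^2

# A 232 mm^2 14nm die carries roughly as many transistors as a
# 464 mm^2 28nm die -- slightly more than Hawaii -- before even
# counting architectural improvements or higher clocks.
print(equivalent_28nm_area > hawaii_28nm)  # True
```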

      • johnnyorgan

        It’s not about having the most powerful card. They’re attempting to make VR ready cards that are cheap and power efficient.
        Don’t expect a Fury X beater this year, unless they’ve progressed much faster than originally anticipated. Expect similar performing cards but with less power and (hopefully) dirt cheap.

        • Cain Marko

          VR for everyone – that’s the whole point of the “TAM” comment. I think you’re looking at a 470/X doing 290 performance, a 480/X doing between 390/X and Fury, and a 490/X on par with or better than the 1070; then Vega 10 comes and competes with the 1080/Ti.
          That doesn’t look like a bad midrange lineup at all, and I can only hope that’s what’s coming.

        • jahramika

          A 390 is Vr ready

    • Funny thing is, I saw Polaris 10 running at 1050 MHz, and all the leaked results ‘on the Internets’ are quoting the 800 MHz version. Whether this is deliberate or not, I do not know at the moment – but I believe AMD is deliberately not showing its cards. Especially if the clock ranges in the AMD Graphics Manager prove to be correct: 1-2 GHz for the GDDR5 memory, 500 MHz-2.0 GHz for the GPU. And I don’t think Polaris can hit 2 GHz easily.

      • Psionicinversion

        800mhz are engineering samples not what the final silicon can do

      • chrisday85

        I just saw a stellar reveal that the card might be clocked at 1350 MHz.

        Considering how well it did at 800mhz I’m excited. Apparently this Polaris 10 is a mobile version too. I wonder why the ps neo isn’t clocking it at somewhere similar.

        • Consoles operate with a lower power budget and a lower allowance for heat due to their small size. Also, in consoles both are put into an APU, not dedicated units, which limits things.

          • chrisday85

            A: Regarding a lower power budget – not by comparison to a gaming laptop with a Polaris 10, and CPUs in laptops run at higher frequencies. There is simply no way the consoles will use more power on an APU, which actually has a lower combined power draw than separate dedicated parts in a laptop. Also, the console doesn’t have any reason to require a lower power draw. A full Polaris 10 draws 86 watts at most.

            B: There is also no way that a console will have more heat problems than a laptop component.

            You are parroting old talking points and are wrong on them as is.

          • I was comparing consoles to desktops not laptops. So, no I am not parroting old talking points. You are wrong to assume I meant laptops when actually I meant desktops.

            So, the point is that consoles need to use less power than *desktops* so they cannot afford to use a Polaris 10* with the full clockspeed of Polaris 10. They might even disable some of the cores for this reason as well. You need only look at the current PS4 GPU for both being in effect. It is based on the Radeon HD 7870 but has 2 Compute Units disabled and is downclocked.

            *BTW Polaris 10 is for mainstream gaming desktops not laptops. Polaris 11 is the laptop (and lower end desktop part). AMD were rather confusing about their numbering.

          • chrisday85

            Wrong. The Polaris 10 is aimed at the mainstream market correct, but it is exactly the same as the Polaris in notebooks. The reason is the power draw.

            They are so power efficient they are using the same part for laptops as desktops.

            The Polaris 10 is going in the ps neo, but also is going in laptops which will be clocked at 1250 MHz and there is no reason a Polaris 10 would have more heat problems or power draw problems in a laptop as opposed to ps neo.

            I just gave links proving as much, with regards to power draw.

          • “The Polaris 10 is aimed at the mainstream market correct, but it is exactly the same as the Polaris in notebooks.”

            The same architecturally? Yes. The same in terms of power draw? No. Because it uses *more Compute Units* and *higher clock speeds* than Polaris 11. So there are differences. And it is Polaris *11* that is going into laptops. AMD themselves have been saying all of this. I think that is the most reliable source bar none.

            Okay, some hardware vendors may end up putting Polaris 10 into high end thick gaming laptops despite AMD’s intentions, but that is another matter.

            As for your links they prove only the power draw of the lower end laptop parts, and *not* the 36 Compute Unit part rumoured to be in the PS4k. In fact many leaks say the 36 CU part has about 130-150 watt power draw. No way a console can use that. It’d have to tone clocks down.

          • In fact it is rather logical that a 36 CU part that is meant to replace the R9s would use 130-150 watts. It simply is not possible at the moment to do current R9 levels or better at lower power draw.

          • My point was that it is simply not possible to fit a full 36/40 Compute Unit desktop Polaris at 1200-1400 MHz into the console’s power and heat limitations, especially inside a SoC (I actually should have used that term earlier not APU).

          • chrisday85

            Ps4 total power draw:

            150 watts.

            http://energyusecalculator.com/electricity_gameconsole.htm

            Polaris 10: 86 on max load.

            Apu’s combined for a fact use less wattage than separate dedicated parts.

            Kabini used 9 to 15 watts.

            http://www.theverge.com/2013/5/23/4357938/amd-jaguar-performance-revealed-kabini-temash

            The power draw of a zen at 2.1 ghz would be 20 on a bad day, especially considering the 14nm and the low clock.

            We have 106 total power draw at max from those. This easily leaves enough for a 150 power draw.

            You’re just bluntly wrong here.

            Before you parrot a talking point look up power draws in advance.

          • chrisday85

            12 cu’s, more than double the clock of ps4, larger die size,

            45 watts optimized.

            http://www.extremetech.com/computing/187254-amds-kaveri-onslaught-new-apus-better-pricing-and-lower-power-consumption

            This is a slightly smaller gpu than the one in the ps4.

            The apu’s they make are more efficient not less. This is equivalent to a 265, let’s just take a m260x power draw on its own:

            http://www.game-debate.com/gpu/index.php?gid=2294&gid2=1852&compare=radeon-r7-m260x-vs-geforce-gt-740m-64-bit-edition

            35 watts. The GPU on its own is within 10 watts of the whole APU including the CPU.

            Their apu’s are more efficient not less.

        • Even if all you said earlier is true, there is another factor: 14nm FinFET is a new process node. When process nodes are new, there are more defective and partially defective chips. Consoles are at a disadvantage when it comes to dealing with this. With PC parts, where possible, some elements of the partially defective chips are disabled and the chips are sold as one of the lower-end CPUs/GPUs. Consoles cannot do this since there is only the one model, so they have to bin all partially defective chips.

          However, one method that Sony and AMD can use to mitigate the risk is to produce a simpler SoC, whether by disabling some cores, lowering clock speeds, etc. This would result in fewer binned chips and thus lower overall costs. Given that consoles need to be manufactured and sold at low prices compared to even equivalently spec’d PCs, this is a strong motivating factor for both Sony and AMD.

          • Bert Tweetering

            They certainly will have spare cores and gpu units to reduce the wasted cpu’s resulting from misprints. Plus, by the time these are in production (and on shelves, summer or xmas 2017) there will already be some experience on the new node.

            Also, it might be possible to salvage remaining misprints for use in graphics cards.

            I have a hard time believing 8 physical cores and a hefty gpu on the same die (plus the spares), since zen is a large core. 8 logical ones, i.e. four cores, and less L2 and perhaps no L3 cache would have been my guess.

          • Actually, at the moment all indications point towards a release late this year, so AMD and Global Foundries/Samsung won’t have a lot of experience with 14 nm FinFET.

            Misprints *cannot* be salvaged for use in graphics cards. The GPU is integrated into an SoC; the CPU, GPU and control logic are printed as a single unit. You cannot simply remove the GPU and use it elsewhere.

            Also, the consoles use customised GPUs, including new features or feature changes that are not in other GPUs. Therefore they don’t match any desktop or laptop GPU, so cannot be used as one.

            As for the size of Zen, again they use custom CPUs not exact versions with hardware changes (hence why PS4 and XBO using “only” Jaguar is not as straightforward as people think). This is likely what they are referring to in the article when they say “Zen Lite”. It is quite possible for AMD to make a custom lighter Zen CPU for consoles. For example, simultaneous multi-threading would probably be removed since consoles need to keep things as close as possible to the older model, at least for now.

            However, I am somewhat skeptical of Zen Lite since Zen does not look like it will be ready for release late this year, which is when PS4K looks like it will be released.

          • Bert Tweetering

            There is some debate on the SemiAccurate forum on how generic or standard these semi custom console chips are; and I wonder how generic or quirky the GPU units are. It’s possible the GPU is best kept standard, to simplify writing the drivers.

            PS4 is very strange in that it uses mostly GDDR5 for main memory for the APU (though there is half a gigabyte of DDR3, probably for bootstrapping the booting of the OS, and possibly for the whole OS).

            Zen should be released later this year. Though, yes, the AM4 release seems to be late (I was expecting it already, but I guess we’ll likely see those boards in June).

            It would be a surprising feat if they managed to release this on 14nm and have consoles shipped to retailers by this December. So surprising that I think either the 14nm news is an incorrect rumor, or that PS4k won’t be ready this year.

          • Where did you hear about this 512 MB of DDR3? Neither Sony nor developers have ever mentioned it in any statement or document that I have seen.

          • Bert Tweetering

            the specs on Wikipedia, although I got my numbers wrong (it’s 256MB):

            “Memory
            8 GB GDDR5 (unified)

            256 MB DDR3 RAM (for background tasks)[9]”

            https://en.wikipedia.org/wiki/PlayStation_4

            also, interestingly from the wiki specs, it has a secondary processor, (maybe the arm cpu as in all recent AMD non FX chips?)

            “Secondary low power processor (for background tasks)[9]”

            [9]: http://www.ifixit.com/Teardown/PlayStation+4+Teardown/19493

          • Okay, that is rather interesting. In fact it may hint at how Sony achieved the “extra 512 MB of memory that developers can use” in the PS4K. Perhaps they increased the amount of DDR3 by 512 MB (or better yet upgraded to DDR4, which may be necessary if they are using Zen) so that less of the GDDR5 is used for background tasks and thus developers can use more RAM for their games.

            Thanks, for the links.

            EDIT: Changed wording a little for clarity.

          • jahramika

            Samsung is on generation 2 of 14nm; I disagree with your experience argument.

    • Sykobee

      It’s $450. The $380 is a price cut down the line when they have non reference cards.

    • johnnyorgan

      When the GPU alone costs more than a home console, it’s an enthusiast’s card.

    • jahramika

      Okay, well first off, there is nothing Nvidia can do to wreck AMD’s business plan for console and desktop CPUs and APUs, since Nvidia can make neither. If your definition of victory is the high-end PC GPU market, then you need to rethink your words. Even Nvidia is chasing car parts and deep learning, because their high-end PC GPU plan is not enough alone.

    • chrisday85

      The gtx 1070 is not much faster than a gtx 980ti. You’re aware of this correct?

      The amd Polaris 10 is close to a 980ti as well based on rumors. The Polaris card will have a $299 cost, and power consumption that is far lower than the gtx 1070.

      The Polaris 10 ran with an average of 86 watts when tested. The gtx 1070 hasn’t been tested but we know the gtx 1080 is 180. Most people are estimating 125-150 for the gtx 1070, I have not seen official numbers.

      The 1070 will by no means wreck the Polaris 10.

      The Polaris is an amazing power improvement. It’s roughly the power of a Fury X with almost 50% fewer cores, no HBM memory, and half the power draw, for less than half the cost. Throw two of these cards together and it is still less than a 1080, and still has a lower average power draw. Most people don’t buy the flagship card. The Polaris 10 is perfectly positioned.

      When they pump that up to higher clocks, more cores (4096 minimum planned, possibly more, 2304 current) and remove the memory bottleneck for 4k with the hbm with vega, yeah. That will show how awesome their new cores are.

    • chrisday85

      Edit, apparently they are making a 6144 core vega. Yeah, the Polaris will mop the floor mid market and the vega 6144 core will have more than enough power to be competitive.

    • jahramika

      Lol, the article is not about nvidia

    • PuiuCS

      anything above $300 is considered to be on the high end/enthusiast side. it’s expensive no matter how you look at it.

  • Mageoftheyear

    Damn, this is exciting news. I’d never considered Roy’s statement on expanding the TAM for VR to include their work with Sony on the Neo, but the amount of certainty he put forth makes sense for that.

    I’m dreaming, but how awesome would it be for PS4K users if it came with an internal SATA III connector? The loading times on the PS4 are awful, and since the interconnects are the bottleneck, replacing the standard HDD with an SSD doesn’t improve things at all. The 6Gb/s bandwidth is really needed.

    • Paul17041993

      The link would have to be SATA III, as that’s the only version the APUs have (the last implementations of SATA II were on AM2/AM3). However, something else is likely bottlenecking load speeds – it could just be the particular games, unless someone has benched data copies on the PS4 directly.

      • Mageoftheyear

        Yeah, anything over SATA III would be a no-go because IIRC SATA Express makes use of PCI bandwidth. I don’t know much about the custom PCB of the PS4 but I’m pretty sure there’s no provision for that and making one may be cost prohibitive.

        There may be other bottlenecks as you suggest.

    • jesse

      I wasn’t impressed with the PS4, and decided never to buy one, but this PS4K looks tempting.

      • Mageoftheyear

        Indeed. I’ll wait for some reviews from VR sites on what the experience is like. If they give it the thumbs up then I’d definitely consider buying a PS4K+PSVR bundle.

    • Yeah, it would make sense. Sony says PlayStation VR will work with the OG PS4… and I am sure it will. But well enough to expand the TAM for VR? Probably, but working with PS4K would do a *lot* more to expand the TAM, especially if there is Polaris and Zen in the PS4K.

  • Hussein Jama Hussein

    1 word, Awesome

  • DeToNaToR

    They would be better off using GDDR5X instead of normal GDDR5, especially now that GDDR5X has entered mass production!
    http://videocardz.com/59839/micron-gddr5x-memory-enters-mass-production

    • Probably too expensive to fit it into the budget for a console. They are likely pushing the budget with going to faster GDDR5.

  • jesse

    The new 14nm FinFET APU consists out of eight x86 ‘Zen Lite’ LP cores at 2.1 GHz
    Umm what is a Zen lite core… I’m guessing this is the Zen console version lol?

  • TheDizz

    I was right after all – it made no sense to go with Jaguar cores when you have the means to move over to newer cores on the newer process. The effort is already being made to do FE functional verification for the whole SoC; might as well put in newer cores that actually make sense for the beefed-up GPU.

  • RSene

    The “should be announced during Tokyo Game Show 2016” part – is that info from a source, or just your own speculation?

    • PuiuCS

      mostly because it’s not going to be at E3.

  • Ashley Gann

    Sorry, but 8x Puma+ cores w/ a coherent fabric interconnect to a Polaris based GPU (2304 SP or smaller) makes a lot more sense than Zen cores imo 🙂

    • R Valencia

      AMD’s road map indicates Puma+ being replaced by low-power Zen.

      • Ashley Gann

        I’d love that, but the timing and cost involved seem to make that very unlikely for PSneo. Hell, I’d love it if they sold an APU with the same specs!

        • R Valencia

          It depends on ZEN Lite LP’s die size.

          The work involved…
          Cut-down/modify existing FinFET ZEN for NEO
          or
          Shrink 28nm Jaguar into FinFET

          • Anukul

            shrinking jaguar is not happening.

      • AMD clearly states 14nm FinFET is either Zen or K12. Sony and Microsoft will keep the same CPU and GPU ISA.

  • Dus Shel

    I can’t believe they’re naming a new PlayStation after Zen.

  • AS118

    If these PS4 Neo rumors are true, I’m glad I waited to buy. A Zen Lite core at 2.1 GHz with Polaris GPU tech is very nice. I’m sure there’ll be a price premium involved as well, but if it’s not that bad, I’m willing to pay it.

    Plus, it’ll probably mean the old stuff gets a price cut too. Win/win imho.

    • SupZ Photography

      This PS4 Neo sounds tempting. One thing is for sure, and that is that I will upgrade the HDD with a hybrid or SSD.

  • disqus_GB8lUuziuG

    this is great news!

  • Eddie Battikha

    Thank You Sony, I can’t wait to get a 55 inch 4K HDR TV and PS4K.

  • chrisday85

    If this is even remotely close to the Zen improvements, we are talking an i5-type core with a Polaris 10 GPU that is about as powerful as a GTX 980 Ti. That is actually competitive. Very competitive.

    About dang time.

  • Zen Lite? Are you sure? Even if the other sites are wrong about Jaguar cores, Zen does not look like it would be ready in time since all signs point to very late this year or early next year even for desktop and laptop Zen. I am sure AMD would release it for those before consoles got it, even if just barely. However, that could work out if rumours about the launch date are wrong and Neo’s release is further away.

    • Zen is October. So is Vega. My sources told me they’re not Jaguar cores – Jaguar is a 28nm planar transistor design. The new chip – and it’s just one – SoC or APU, depending on what you like to call it – is a 14nm FinFET part. Redesigning the core from planar to FinFET is a ‘pointless exercise’; you have to build the same chip all over again. Zen Lite might be “Puma 2”, or something similar. I wrote what I know, which admittedly is not much – it’s still early days to leak something out of Japan.

      • Well, it would be a first for AMD if they gave console makers a new CPU. Though if it uses Polaris, they are already breaking the same precedent with the GPU, so that could lend credibility to what your sources claim. And personally, Polaris is the only GPU that makes sense as far as I can tell.

        Also, it makes sense from another perspective: Jaguar has proven to be a bottleneck to consoles but even Zen Lite would probably be much less so.

        • Actually, it would not be the first. Pre-AMD ATI had their GPU architecture debut in a console first, then in a PC. As far as current console architecture goes, they all went with FinFETs and 14nm process. There are no Jaguars or Pumas for FinFET technology. It is a complete redesign of an APU. It has to be backwards compatible with all the PS4 titles, but the pixel power is focused on rendering VR content for PlayStation VR HMD.

          As far as sources go, I wish…

          • That was a GPU. I said a first for the CPU. But I guess it is possible that that is changing to be more like pre-AMD ATI was with GPUs. Especially since semi-custom is only going to grow and become a larger part of their business, and thus get an increasingly larger focus for resources and release dates.

  • Scott Stapp

    I think it’s great that they’re naming a console after Keanu Reeves.

  • 49ers fan

    According to wiiudaily.com, the Nintendo NX will be revealed at Tokyo Game Show in September… just saying.

    • MS

      And? Who cares?…..just saying.

  • askjiir

    According to all other sites, the CPU is a Jaguar overclocked to 2.1 GHz. Are you saying they will be using a more powerful processor?

    • As the article writer explained in reply to a comment of mine, doing that would require a complete redesign of Jaguar, since designing for FinFET transistors is very different from designing for planar transistors. If it were a simple node shrink, they could easily keep using Jaguar (or its successor, Puma).

      But going from planar to FinFET requires a redesign. And it does not make sense, from either a financial or a technological point of view, to redesign an existing CPU and GPU. It makes much more sense to use processors designed for FinFET, i.e. Zen and Polaris.

      This ends up not only with a much more powerful SoC; it is also cheaper to design that much more powerful SoC than to redesign the Jaguar-based SoC, as they can piggyback on already existing R&D for 14nm FinFET. Even if AMD and Sony were in great financial situations, it would make more sense to do things that way.

      • askjiir

        So they might not use the Jaguar after all?

        • That would be the most logical choice for Sony to take. They could go the other route, but it would be really stupid of them. A worse decision than using Cell was. Cell was misguided, not stupid. And even then it is clearer now in retrospect than it was at the time. But *completely* redesigning Jaguar for FinFET would be stupid and that is fully clear now not just in retrospect.

          • askjiir

            So they will not use the Jaguar?

          • I mean that not using Jaguar and instead using Zen would make more sense. Using Jaguar would be stupid.

          • askjiir

            If so that’s great news. More processing power, yes please.

          • I hope so too, especially since developers have said the CPU is the main bottleneck. An overclock of Jaguar would help a little, but with such a big GPU improvement, it may actually end up a bigger bottleneck once the extra GPU power is used more.

          • astrix_au

            Wouldn’t the fact that it has 4K capability mean it has HEVC, meaning Polaris?

          • Well, 4K media. I wouldn’t put much stock in 4K gaming except *maybe* upscaled. But I am inclined to believe it is Polaris for other reasons. Well, a customised Polaris GPU anyway. But if the CPU is Jaguar (or even Puma) then I worry about it not being powerful enough to keep up with the GPU, even accounting for the GPU’s compute capabilities. But a low-end Zen would be powerful enough. And at Computex AMD spoke a little about Zen, and curiously they were already talking about a 15W TDP Zen, suggesting that low-end Zen is further along than many realised.

          • astrix_au

            Isn’t that what they said? It will be upscaled to 4K; obviously it won’t be native 4K, but maybe using VSR.

          • Well, unless they render natively at 1440p I would not want upscaled 4K. 1080p tends to look “muddy” when upscaled so much. The problem is that the upscaling technology has to *guess* where to put the pixels, and in the case of upscaling 1080p to 4K, three times as many pixels are placed based on guesses as are precisely placed, which can be problematic.
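[Ed. note: the three-to-one ratio in the comment above follows directly from the pixel counts; a minimal Python sketch, using the standard 1080p and UHD resolutions:]

```python
# Native render resolution vs the 4K display target.
native = 1920 * 1080   # 2,073,600 pixels actually rendered
target = 3840 * 2160   # 8,294,400 pixels on a 4K screen

scale = target / native      # 4.0: each native pixel must cover 4 output pixels
guessed = target - native    # 6,220,800 pixels the upscaler has to interpolate
ratio = guessed / native     # 3.0: three guessed pixels per precisely placed one

print(scale, ratio)   # 4.0 3.0
```

So for every rendered pixel, three more are interpolated, which is why aggressive 1080p-to-4K upscaling can look soft.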

          • Joe_HTH

            It’s not using a Zen, according to every credible source out there. How many times does that need to be said? It doesn’t matter whether you think using Jaguar would be stupid. According to all info, it is in fact using a Jaguar. Sounds like wishful thinking on your part.

          • Jaguar still makes no sense from a commercial, performance, or engineering perspective. If they are using Jaguar, that still doesn’t change. It just means they were idiotic about the CPU design. Because it *IS* idiotic to spend hundreds of millions to *completely redesign* Jaguar to go from planar to FinFET when Zen would be cheaper to use yet perform vastly better. It is doubly stupid since Jaguar cannot keep up with a nearly 4.2 TFLOPS GPU. My choice to assume VR World was correct and other sources were wrong has nothing to do with wishful thinking and everything to do with assuming Sony would be intelligent about the design, not stupid.

            But you know what? All those other sources saying it is Jaguar prove *absolutely nothing* because they are unconfirmed rumours. It is *not* information, it is pure rumour. The same goes for what VR World said. However, one is a much smarter move, as I said above. And the number of sites saying it is irrelevant. Numbers don’t make things more or less likely to be accurate. Only connection to reality matters.

          • Eddie Battikha

            If u really think they’re using a Jaguar CPU then obviously u don’t know what’s going on and how things work.

      • Joe_HTH

        No it wouldn’t, because the Jaguar in the Xbox One is clocked faster than the Jaguar in the PS4. That didn’t require a complete redesign, and neither would a clock boost of 500MHz. All data, including very credible sources, says it’s a Jaguar. There is no AMD CPU other than Zen, and I doubt very seriously that AMD designed a custom chip, nor would Sony have wanted to pay for one.

        • Did you read what I said? I never mentioned clock speeds. Clock speeds are irrelevant to my point. The problem is going from planar transistors to FinFET transistors. The two work in fundamentally different ways, so processors using one have to be designed very differently from processors using the other. Even at the exact same clock speed and with the exact same feature set, a redesign is necessary because of FinFET.

          I don’t expect AMD designed a custom chip. I assume they designed a *semi-custom* chip, which is exactly what happened for the PS4 and XBO. It is exactly what the entire department of AMD that worked on those SoCs is about, i.e. semi-custom. It is exactly what Neo and Scorpio will be using, i.e. semi-custom. And I suspect that when we see a Zen-based SoC in consoles (if it is not these iterations, it will almost certainly happen in the iterations after them), it will be a semi-custom version of the CPU component of the laptop Zen APU, hence the “Lite” in “Zen Lite”. The APUs from AMD have always been the “Lite” versions of their architectures, especially for laptops.

        • Eddie Battikha

          Learn how to read and understand what people are saying; the PS4 Neo APU is a Zen Lite CPU with a Polaris GPU.

  • Hvd

    I don’t think it is, unless it’s one cheap cut-down version of it. Maybe an RX 460.
    The RX 480 has 5.5 TFLOPS and the PS4 Neo has just over 4 TFLOPS.

    It would have to be a really cut-down Polaris GPU.
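[Ed. note: the TFLOPS figures being compared above follow from shader count times clock; a rough Python sketch, where the 2304-shader count and the clocks are the publicly circulated numbers at the time, not confirmed Neo specs:]

```python
def tflops(shaders, clock_ghz, ops_per_clock=2):
    """Peak FP32 throughput: shaders x 2 ops per clock (FMA) x clock in GHz."""
    return shaders * ops_per_clock * clock_ghz / 1000.0

rx_480 = tflops(2304, 1.266)   # ~5.8 TFLOPS at the RX 480 boost clock
neo = tflops(2304, 0.911)      # ~4.2 TFLOPS at the rumoured Neo GPU clock

print(round(rx_480, 1), round(neo, 1))   # 5.8 4.2
```

On these assumed numbers, a Neo chip could use the full Polaris 10 shader array at a lower clock rather than a heavily cut-down part.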

  • Joe_HTH

    This is a silly article. They didn’t confirm anything. We know the Neo is using an upclocked Jaguar and a Polaris. The Jaguar is not a 14nm part. Only the Zen is a 14nm part.

    • MS

      Your comment makes no sense at all! First you say that we know the Neo is using Jaguar, but then you go on to say that no Jaguar comes as a 14nm part, only Zen, while “we know” the Neo is 14nm. So which is it, you numbnuck?

  • getagrip

    lol plaYSTATION PLEBS

    IL BE GETTING A NINTENDO NX

    • MS

      Fantard

  • sean moses

    So when you said: “neither Microsoft nor Sony were ready to pay for the cost of moving from a planar transistor (28nm) to a FinFET transistor design (14nm).”

    Your info was wrong (you could play semantics and say MS chose TSMC’s 16nm process instead!). Microsoft clearly paid for, and went to the trouble of, doing it for the Xbox One S.