NVIDIA Pascal vs AMD Polaris – news & predictions

We had been holding off writing this article (which had been tempting to start earlier this year) due to the amount of speculation and conjecture surrounding the forthcoming GPU architectures from NVIDIA (Pascal) and AMD (Polaris 10 and 11).

However, with NVIDIA’s unveiling of the GTX 1080 and 1070 at 2am BST on 07/05/2016, the time has come to start tracking all the various pieces of information (confirmed or otherwise) in one place for your reading pleasure.

But before we start, let’s have a recap of where we are and what has led up to this exciting point in the history of PC graphics cards…


The long-in-the-tooth 28nm manufacturing process…


The launch of the now-legendary Radeon HD 7970 heralded the first GPU to use the 28nm manufacturing node.  If you can believe it, this graphics card debuted in 2011 and we have been stuck on this manufacturing node for five years – an unprecedentedly long time for the graphics card industry, which usually represents one of the most innovative and rapidly changing aspects of the modern PC.


The legendary Radeon HD 7970 – the world’s first 28nm GPU

One of the reasons we have been stuck on this manufacturing node is the ill-fated 20nm planar technology, which was due for release in 2014 but was deemed unsuitable for the power requirements of high-end discrete GPUs.

Both NVIDIA and AMD decided to skip this manufacturing node and focus on working with what they had:


  • Mainstream / High-End – AMD chose to consolidate their 2013 GCN 1.1 investment by tweaking the Hawaii core (used on the Radeon R9 290/290X) to produce the Grenada core (used on the Radeon R9 390/390X), with the latter offering slightly improved performance and power consumption
Radeon R9 390X

The Radeon R9 390X – a tweaked R9 290X but largely the same GPU

  • Enthusiast – to tackle the threat posed by the NVIDIA GeForce GTX 980 Ti, AMD produced the GCN 1.2 Fiji core utilised in the Radeon R9 Fury/Fury X, which was paired with the world’s first use of High Bandwidth Memory (HBM), offering competitive performance at 1440p and 4k resolutions
AMD R9 Fury X

The innovative R9 Fury X – the world’s first GPU to use HBM memory


  • Mainstream / High-End – NVIDIA revolutionised their power consumption with the release of the Maxwell architecture (which debuted on the GeForce GTX 750/750 Ti) to produce the incredibly popular GeForce GTX 970/980 GPUs, offering superb performance but with significantly lower power consumption (and more overclocking headroom) than their AMD counterparts.  The GeForce GTX 970 has been so popular that it took the top spot for DirectX 11-capable GPUs (including integrated) with a share of nearly 5% of the entire install base in the December 2015 Steam hardware survey
GeForce GTX 970

GeForce GTX 970 – incredibly popular amongst gamers

  • Enthusiast – NVIDIA’s ‘full’ Maxwell offering came in the form of the GeForce Titan X – the world’s fastest single GPU available today.  Paired with 12GB of GDDR5 memory, the Titan X is a true luxury card designed for ‘money is no object’ purchases and those looking for an entry-level workstation card.  The GeForce GTX 980 Ti, on the other hand, offers the majority of the Titan X’s performance at a much-reduced price-tag and has been very popular with enthusiasts gaming at 1440p and 4k resolutions

The NVIDIA Titan X – the world’s fastest single GPU card comes at a high price


Enough with the history lesson!  What exactly is exciting about this year’s releases?


Moving forwards to 2016, AMD and NVIDIA have been focusing their efforts on the more advanced FinFET technology, which provides 3D / non-planar transistors, allowing the continuation of Moore’s Law that would otherwise have been impossible with traditional planar transistor approaches.

AMD have partnered with Samsung / GlobalFoundries to utilise their 14nm FinFET technology for their Polaris 10 / 11 based GPUs.  NVIDIA, on the other hand, have allied with the Taiwan Semiconductor Manufacturing Company (TSMC) and their 16nm ’16FF+’ (FinFET Plus) node.

More information about 14nm vs 16nm FinFET technologies can be read here; however, both techniques offer significant reductions in heat and power consumption whilst allowing for increased performance through higher clock speeds.  In addition, AMD and NVIDIA partnering with different fabrication companies should, in theory, mean that part supplies at launch will be improved compared to previous releases, when both companies used TSMC.

This year’s GPUs represent truly ‘next gen’ products which make both 4k resolutions and VR obtainable from a single card within ‘mainstream’ price brackets.

The following sections will be written in a ‘blog’ style capturing my own excitement / views / predictions as well as snippets of information which will be highlighted as either Confirmed, Rumour or TerminatorUK’s prediction (take the latter with a large handful of salt!) depending on the source of the information.


…and in the Green corner – NVIDIA Pascal GPUs




Staying up until 2am BST just to watch a live stream of a new graphics card release?…am I crazy?…do I need my head examined?  No – that’s just how we roll at RageQuitters!

Despite the crazy UK time when most of the nation would have been asleep, I felt compelled to stay up and watch the special GeForce Live Stream Event which was held at 6pm PST on 06/05/2016 (2am BST on 07/05/2016) and, boy, I’m glad I did.

NVIDIA CEO and Co-Founder, Jen-Hsun Huang, took to the stage and teased us within the first few opening minutes with the presentation topics.

This was definitely one for a ‘reaction’ type of video, but I will spare you that because it’s pretty cliché… (and no one wants to see me drunk with fatigue on camera at that time of the morning).  When I saw that one of the topics on the presentation was ‘a new king’, I didn’t think “has Elvis made a return?” – no, I knew then we were in for something a bit special (I nearly weed a little in fact).

We were made to wait whilst we learned about NVIDIA’s Ansel (in-game 3D photography software), VRWorks Audio (positional sound for VR), as well as the benefits of a unified x86 platform between PC and Next-Gen consoles (PS4 / XBox One) and the graphical quality of modern titles.

However, it wasn’t long until we learnt what ‘the new king’ was:

Confirmed – the rumours and leaks were true – NVIDIA announced the GeForce GTX 1080 – a monster of a GPU which was beyond anyone’s wildest dreams:

GTX 1080

The new king – the GeForce GTX 1080

The specifications look excellent, and the use of Micron’s new GDDR5X memory has been confirmed, providing a 50% increase in memory bandwidth over GDDR5 – a very good start.
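As a back-of-the-envelope check (assuming, per the leaks, 10 Gbps GDDR5X on a 256-bit bus for the GTX 1080 versus the GTX 980’s 7 Gbps GDDR5 on the same bus width – figures not all stated in the presentation itself), peak memory bandwidth works out like this:

```python
# Peak theoretical memory bandwidth: per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte.
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Return peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# Assumed figures: GTX 980 = 7 Gbps GDDR5, GTX 1080 = 10 Gbps GDDR5X, both on a 256-bit bus.
gtx_980 = peak_bandwidth_gbs(7.0, 256)    # 224.0 GB/s
gtx_1080 = peak_bandwidth_gbs(10.0, 256)  # 320.0 GB/s
print(f"{gtx_980:.0f} GB/s vs {gtx_1080:.0f} GB/s ({gtx_1080 / gtx_980 - 1:.0%} uplift)")
```

On those assumed numbers the uplift at the same bus width is ~43%; GDDR5X’s headroom beyond 10 Gbps would account for the quoted 50% and more.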

Jen-Hsun Huang explained that the flagship GPU was only made possible through substantial ‘craftsmanship’ and a massive increase in performance per watt:

NVIDIA Craftsmanship

NVIDIA’s definition of ‘Craftsmanship’ taking the Maxwell architecture and making it even more energy efficient…


GTX 1080 vs Titan X

…x2 the performance and x3 the efficiency of a Titan X?!! Oh wait, see the ‘relative VR gaming performance’ on the Y-axis? Hmm…

However what came next I wasn’t expecting….

Holy s**t - faster than x2 GTX 980's in SLI!! :-O


Yes, you heard it folks – faster than x2 GTX 980’s in SLI!  That’s a significant increase in performance over the current fastest single GPU king – the Titan X – by around 20% at least and at a fraction of the price; very impressive indeed.

NVIDIA went on to show a live demo of Epic Games’ Paragon, rendering photo-realistic scenes at 60fps whilst the GPU maintained a cool 67 degrees Celsius:

2.1GHz…at 67 degrees Celsius! What kind of voodoo magic is this?!

Notice that the clock rate is a blistering 2.1GHz on air – these kinds of clock rates have been unheard of in the GPU world and provide hard evidence that the investment in the 16nm FinFET manufacturing process has definitely paid off.

Whether this is a cherry-picked “founder’s edition” card or not, only time will tell; however, this is very impressive nonetheless.

Jen-Hsun Huang went on to say that the cards would have ‘massive overclockability’, which either means that even more headroom might be available for the enthusiast or simply that every GTX 1080 will easily reach 2GHz+ without breaking a sweat.

Either way, this is great news for all you overclockers out there and I’m sure it will inspire another generation of custom water-cooling loops to get the very most out of the card.

The GTX 1080 was also shown in a demo running the 2016 reboot of id Software’s Doom.  Initially the 1080p demo (which was using the open-standard Vulkan API) was capped at 60fps, but the developers unlocked the framerate to show the card pushing a staggering 120-200fps:

Confirmed – the GeForce GTX 1080 will be launching on May 27th 2016.

But, wait, there’s more!!

If the announcement of the GTX 1080 wasn’t already enough, Jen-Hsun Huang also revealed the higher-volume GPU with a more ‘realistic’ price-point for most users.  From the rumours you may have already guessed, but this was as follows:

Confirmed – NVIDIA also announced the GeForce GTX 1070

GTX 1070

Faster than a Titan X, using less energy and costing only $379?! Take my money dammit!

The green team’s more ‘mainstream’ part will use regular GDDR5 memory and will prove very popular with gamers looking for the ultimate price / performance ratio.

As with the GTX 1080, the GTX 1070 will be released at a ‘base’ price representing the 3rd-party vendor (EVGA, MSI, Gigabyte etc…) custom boards / cooling solutions, plus a slightly more expensive ‘founders’ edition which comes direct from NVIDIA featuring an aluminium alloy body, advanced vapour-chamber design, a heatsink with a high fin density and a ‘blower’ style fan to exhaust hot air out of the back of the case (TerminatorUK’s prediction – most likely within a good acoustic envelope due to the lower 150W TDP and the already cool running of the GTX 1080).

Confirmed – the GeForce GTX 1070 will be launching on June 10th 2016.




WCCF Tech has revealed benchmarks showing the GTX 1080 running Ashes of the Singularity.

This is a futuristic RTS dominated in DirectX 12 by AMD thanks to Graphics Core Next (GCN)’s excellent handling of DX12’s asynchronous compute capabilities – something the NVIDIA Maxwell cards have been struggling with, showing very minor gains in DX12 performance vs. massive leaps for AMD due to the lack of hardware acceleration.

Fortunately the benchmarks are looking good for NVIDIA and the Pascal silicon this time in DirectX 12 with regards to async compute – there appear to be significant gains vs. the Maxwell architecture, which should bode well for other DX12 titles as well.

Whether this is purely down to the ‘brute force’ of the card or NVIDIA have been able to fix DirectX 12 performance with Pascal, it is too early to tell; however, the results are encouraging.


11/05/16 – an article has been published showing the first custom GeForce GTX 1080, from Galax:


Galax GeForce GTX 1080

Creepy face….


Galax GeForce GTX 1080 - 2

Pretty plain – looks like a cheap and cheerful founder’s edition cooler


Galax GeForce GTX 1080 - 3

No backplate = No sale for me…


This looks like it might be a contender for the cheapest GeForce GTX 1080 that will be released!




Two pieces of news for the green team’s section today.

Firstly, Fudzilla has posted information regarding the GeForce GTX 1080‘s new SLI HB (high-bandwidth) bridge.

The new SLI HB bridge looks the part


The SLI HB bridge is available in adjacent, 1-slot-spacing and 2-slot-spacing designs



However, it appears that the new bridge will only be capable of 2-way SLI (hats off to anyone with enough spare cash to buy more than one GTX 1080 anyway).

Rumour – 3-way SLI might be possible with legacy bridges but this hasn’t been confirmed

TerminatorUK’s prediction – if I were to guess, I would say there will be minimal difference between the legacy and new ‘high bandwidth’ bridge versions, and I’m sure someone will author an article showing that the difference is minor to none, making the above a bit of a non-issue – providing the legacy bridges do still work for users wanting to run 3-way SLI with three GeForce GTX 1080s


The second piece of news is that WCCF Tech has shown Zotac’s custom-cooler GeForce GTX 1080:

Looking hawt!! The custom LED lighting will have modders happy to buy this GeForce GTX 1080


Zotac have managed to rig an excellent-looking custom LED lighting system along the edge of the backplate of their cards – very cool for modders and those with a side-window on their case to show off their new NVIDIA GeForce GTX 1080.

Their new Prime Gamer Force (PGF) edition will even allow customisation of the LED lighting via their Firestorm software utility.


13/05/16 – some incredible 3DMark benchmarks, including overclocking results, have been released for the GeForce GTX 1080:

The GeForce GTX 1080 - the new king and then some.  This feels like the 8800 GTX all over again; truly next gen graphics



TerminatorUK’s prediction – if these results are indeed true, we are looking at a release akin to the legendary NVIDIA GeForce 8800 GTX all over again.  The overclocking headroom on the card is simply amazing, beating NVIDIA’s current top-end Titan X by 50%.  I cannot wait to see what happens when enthusiasts get these under water (or better cooling) – 2.5GHz anyone?!


…and in the Red corner – AMD Polaris 10 / 11 GPUs




If you thought the writing was on the wall for AMD…think again.  AMD’s Polaris 10 / 11 activity stretches back even further than NVIDIA’s recent burst of activity with Pascal; however, it has gone rather quiet in the red team’s camp at the moment – you could say that AMD are “holding their cards close to their chest” (sorry!).

So what do we know so far?

AMD Polaris is based on Samsung’s 14nm FinFET manufacturing node and primarily focuses on dramatically improving the “performance per watt” ratio of AMD Radeon R9 GPUs.

In fact, AMD claims the performance per watt has improved by as much as x2.5 over previous 28nm Radeon R9 GPUs:

Polaris - x2.5 performance per watt

Polaris – x2.5 performance per watt compared to 28nm Radeon R9 GPUs!

Confirmed – we also know that Polaris 10 will represent the ‘mainstream’ desktop parts whilst Polaris 11 will be targeting the lower-end desktop parts and mobile GPU segments of the market (see WCCF’s article here for more details).

AMD Polaris / Vega / Navi Roadmap


Going back to January – CES 2016 – AMD quietly demonstrated a Polaris GPU (Rumour – it is now believed to be the lower-end Polaris 11 GPU) in an Intel Core i5 system going head-to-head with an identical system that had instead been equipped with an NVIDIA GeForce GTX 950.

A Gamers Nexus news article here has the details; however, this demo was essentially designed to show the performance-per-watt advantage of the Polaris architecture.  The two systems were playing Star Wars Battlefront locked at 60fps (presumably to keep the framerate and power draw consistent) with a Kill A Watt meter recording the total power draw of each system.

The demonstration showed that the Polaris-equipped system used just 85 watts under load vs 148 watts for the NVIDIA GTX 950 equipped system:
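Taking those whole-system figures at face value (85 watts vs 148 watts, both locked at 60fps), the gap can be expressed as energy per rendered frame – a rough sketch of my own, not an official AMD metric:

```python
# Energy per rendered frame from whole-system power draw at a locked framerate:
# joules per frame = watts / frames-per-second (since 1 W = 1 J/s).
def joules_per_frame(system_watts: float, fps: float) -> float:
    return system_watts / fps

polaris_system = joules_per_frame(85, 60)   # ~1.42 J per frame
gtx950_system = joules_per_frame(148, 60)   # ~2.47 J per frame
print(f"GTX 950 system drew {gtx950_system / polaris_system - 1:.0%} more energy per frame")
```

Since the systems were otherwise identical, whole-system draw includes the CPU and platform, so the GPU-only gap would be even larger than the ~74% figure this works out to.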

TerminatorUK’s prediction – this is at least some evidence that Polaris has an excellent low power draw, and presumably low heat generation as a result.  If NVIDIA’s 16nm clock speeds are anything to go by, this should also mean that Polaris has a good chance of hitting high clock speeds with plenty of headroom for overclocking.

But what about actual performance of Polaris?

Unfortunately this is an area shrouded in conjecture and speculation at the moment, and to date nothing has been confirmed with substantiated evidence behind it.

Firstly, we know it is unlikely that Polaris will compete head-to-head with the GTX 1080 (or even the GTX 1070 for that matter) on performance, due to the following statement from AMD Corporate VP Roy Taylor (taken from the Ars Technica interview here):

“The reason Polaris is a big deal, is because I believe we will be able to grow that TAM [total addressable market] significantly,” said Taylor. “I don’t think Nvidia is going to do anything to increase the TAM, because according to everything we’ve seen around Pascal, it’s a high-end part. I don’t know what the price is gonna be, but let’s say it’s as low as £500/$600 and as high as £800/$1000. That price range is not going to expand the TAM for VR. We’re going on the record right now to say Polaris will expand the TAM. Full stop.”

“If you look at the total install base of a Radeon 290, or a GTX 970, or above, it’s 7.5 million units. But the issue is that if a publisher wants to sell a £40/$50 game, that’s not a big enough market to justify that yet. We’ve got to prime the pumps, which means somebody has got to start writing cheques to big games publishers. Or we’ve got to increase the install TAM.”

However, the million dollar question is: what do AMD now consider ‘mainstream’?  Are we talking about previous-generation R9 270X / 280X class products, or have the revolutionary performance-per-watt enhancements stretched that to include R9 290/290X-class performance at a lower-than-ever price-point?

Rumour – in April, Fudzilla reported that Polaris 10 may perform at “close to NVIDIA GeForce GTX 980 Ti” levels (which would suggest an R9 490/490X class of GPU) – promising, particularly if the heat, power consumption, noise and cost are significantly lower.

Rumour – however, there have been naysayer reports in early May suggesting the performance might be much lower.  For example, Fudzilla reported that Polaris 10 might only perform at Radeon R9 390X levels, which would be a disaster in terms of AMD’s credibility to compete in the high-end GPU arena versus NVIDIA’s high-end Pascal counterparts; however, this may well be referring to an R9 480X class of product.

In terms of cost, Polaris 10 is due to target the $349-or-lower segment and will be launching at the end of June / early July.

TerminatorUK’s prediction – whilst it is likely that AMD will not be able to beat the GTX 1080 in a head-to-head performance contest, it is very likely that Polaris 10 will be extremely competitive on the price/performance ratio, total cost, and heat / power / noise fronts.

Whilst the vast majority of information we have seen on AMD’s Polaris 10 and 11 GPUs is either rumour or conjecture and largely unconfirmed at the moment, we have to assume that AMD has an ace up their sleeve somewhere.

There are leaked specifications (reported by WCCF Tech here) which suggest that Polaris 10 is clocked at a mere 800MHz.  For a 14nm part this is either incredibly conservative (keeping power usage deliberately under a certain threshold for product placement / marketing, thereby allowing overclockers to transform the product into something completely different à la the Radeon HD 5970 in the past) or, more likely, represents an early test sample.

Therefore the ‘performance predictions’ we have seen already could be completely out of whack.

Purely based on price, I would predict one of the following two scenarios will happen:

Scenario 1:

  • Radeon R9 490 / 490X is released with GeForce GTX 980 Ti performance at a $349 price-point
  • Radeon R9 480X is released with Radeon R9 390X performance at a $250-299 price-point


Scenario 2:

AMD really have been caught with their pants down in terms of the GTX 1070’s aggressive $379 price / performance ratio and only have a ‘480X’ class product available.

If the NVIDIA GTX 1070 truly does offer ‘Titan X beating’ performance, this would force AMD to slash prices of the 480X to remain competitive on the price / performance front and rake in the ‘high volume’ sales until they can prepare an interim ‘490X’ class product as a stop-gap between now and the full ‘Vega’ high-end GPUs with HBM2 memory that are designed to supersede the Fury X line.




It now appears that we won’t have long to wait until we see AMD laying their cards on the table (there I go again – apologies).  WCCF Tech has revealed that a press conference and grand unveiling (similar to NVIDIA’s recent event) is being planned between 26th – 29th May in Macau, close (in both date and location) to Computex, where the cards were originally meant to be revealed.




Rumour – according to one report, AMD may be planning to launch ‘Vega’ (Polaris’s successor, due in Q1 2017) as early as October this year.

According to the comments in the same article, the company responsible for the development of HBM2 memory may have this ready for September this year which makes the rumour feasible.

TerminatorUK’s prediction – this could be bad news for Polaris: a knee-jerk reaction from AMD and a clear early indicator that it won’t be launching a product able to compete with the GTX 1070 or 1080.


12/05/16 – an interesting table has been posted showing a pair of ‘Vega’ parts, named ‘Vega 10’ and ‘Vega 11’, and the various AMD vs NVIDIA product placements:

AMD Polaris and Vega vs NVIDIA Pascal offerings



TerminatorUK’s prediction – I would personally take this table with a large handful of salt.

Firstly, I would doubt GP106 products (GeForce GTX 1060 and 1050) would be using high-end GDDR5X memory.

Secondly, one would like to hope that the top-end Polaris 10 GPU would at least be competitive with the GeForce GTX 1070, considering their proximity in price bracket.

Thirdly, to my knowledge, there has been no announcement (or rumour for that matter) of any sort of division in the ‘Vega’ product line (i.e. no reference to ‘10’ and ‘11’ variants as with Polaris).


Stay tuned for further updates as they unfold!
