Is the fastest GPU ALWAYS the best?

This is the world's fastest gaming GPU as of today: Nvidia launches the GeForce RTX 4090, a massive improvement over its predecessor in almost every way. So why aren't I more excited? The fact that I could, you know, buy a cheap used car for what Nvidia is asking aside, there are some other problems.

Two years ago, the GeForce RTX 3090 launched at an eye-watering 1,500 US dollars, and Nvidia's justification for this was that it was a Titan-class GPU. Of course, we all know now that that was BS, as Nvidia launched the RTX 3090 Ti just six months ago, in March of 2022, at two grand, with a very modest spec bump for the price and still no Titan-class floating-point compute functionality. Both cards are available at around a thousand dollars or so today thanks to the crash in GPU demand, and the competition from AMD is actually coming in at a little bit less than that.

The RTX 4090 launches at 1,600 dollars US. What do you get for the price of three PS5 gaming consoles? It's not a Titan-class GPU, and it has the same amount of VRAM as the previous-gen RTX 3090, but that's where the similarities end. Not only is the memory as fast as what you'd find on an RTX 3090 Ti, you're getting over 50 percent more CUDA cores that are each clocked over 35 percent higher. I mean, say what you want, but that alone would be a substantial upgrade for a mere seven percent increase in MSRP, which is half the rate of inflation since 2020. "What a steal!" is what I would say, if it weren't for the fact that, honestly, it's still the price of an entire mid-range gaming PC on its own.
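For a bit of perspective, here's the back-of-the-napkin math on those headline figures. This is just a rough sketch built from the numbers above; real games won't scale anywhere near this cleanly, since memory bandwidth, CPU limits, and drivers all get in the way:

```python
# Theoretical scaling from the headline spec changes alone (figures from above).
core_gain = 1.50    # over 50 percent more CUDA cores than the RTX 3090
clock_gain = 1.35   # each clocked over 35 percent higher
print(f"Theoretical throughput gain: {core_gain * clock_gain:.2f}x")  # ~2.03x

# Price math: the 3090 launched at $1,500 in 2020; the 4090 launches at $1,600.
msrp_3090, msrp_4090 = 1500, 1600
print(f"MSRP increase: {(msrp_4090 / msrp_3090 - 1) * 100:.1f}%")     # ~6.7%
```

Even as a theoretical ceiling, roughly double the throughput for a roughly seven percent price bump is a striking ratio.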
So what else does this bring to the table? Nvidia claims that each of their CUDA cores is beefed up compared to the outgoing Ampere architecture in almost all respects, with claimed performance improvements of up to two times what the 30-series could achieve. That's thanks in large part to nearly double the L1 cache and a substantial change in the core layout itself, and also to the use of TSMC's new N4 process, which allows the total die area to be nearly 150 square millimeters smaller than its predecessor's.

Of course, we need to test Nvidia's claims, so we got the labs to fire up our shiny new Socket AM5 bench with a fresh install of Windows 11 21H2 and go to town with testing. Why 21H2? Apparently, 22H2 introduced some issues that Nvidia isn't going to have ironed out by the time this airs. But for now, let's turn our attention to the main event: games. And right out of the gate... okay, no, these numbers aren't right. Turns out, stability issues: the bench messed up the settings and enabled FidelityFX upscaling. Our bad.

It's still really good, though. That's over 60 FPS most of the time in native 4K. We're also looking at minimum frame rates beyond 120 at 4K in Forza Horizon 5, compared to sub-90 on the 3090 Ti, and it is incredibly stable. Assassin's Creed Valhalla's frame rate also shot up by about 50 percent on the 4090, enabling 4K 120 FPS gameplay without any compromises. Remember, this is just the rasterization performance, which we thought Nvidia was trying to hide from us by showing off their RT performance in the press materials.

Far Cry 6, another Ubisoft title, doesn't see quite the same improvement, however, at about 30 percent or so across the board. That's not bad, like, not at all, but I'm just ruined after seeing those other results. Improvements continue to become more modest in Shadow of the Tomb Raider, where we're likely starting to get CPU-limited at 4K. I'll let that sink in for a moment; maybe wash it down with a big swig from our 64-ounce water bottle from lttstore.com.

We see exactly that problem again when we run CS:GO, where, for some bizarre reason, the situation is flipped on its head: the RTX 3090 Ti outperforms the 4090 throughout multiple runs. It's especially surprising because the GPU core clocks and the load remained high. I mean, if you've got an explanation, I'd love to see your take in the comments. Dropping the resolution down to 1440p, it seems like we're becoming even more CPU-bound, but critically, the minimum frame rates in Forza Horizon 5 are massively improved, making for a smoother overall experience. Interestingly, CS:GO actually manages to return to normalcy at 1440p, so, I don't know, maybe we've got a driver bug at 4K or something.

Now, do you still remember those Cyberpunk 2077 results? Okay, so check this out: proportionally, we're looking at an even bigger performance improvement compared to traditional rendering, at nearly double the 3090 Ti, although we're not quite able to pull off 60 FPS at 4K without DLSS. And when we turn ray tracing on in the much older Shadow of the Tomb Raider, the 4090 is capable of nearly doubling the minimum FPS of the 3090 Ti. That brings this title from mostly 60 FPS or more at 4K to a buttery-smooth 100 to 120 FPS, all without DLSS. With DLSS in performance mode, Cyberpunk manages close to 100 FPS in minimum frame rates, enough to be quite smooth with G-Sync enabled, so long as you can handle the image quality. Shadow of the Tomb Raider in DLSS quality mode is just ridiculous: where the 3090 Ti can't quite reach 90 FPS in 5% lows, the 4090 is fast enough to comfortably drive a 144 Hz 4K monitor.

Now, you might be wondering about the new DLSS 3.0, Nvidia's AI frame generation technology. We're wondering too. Unfortunately, all of our cards, even the third-party ones that we're not allowed to show you today, all of them crashed. A lot. Like, a lot a lot, even across multiple benches. As a result, the labs weren't able to properly test DLSS 3.0, and we'd like more ray tracing results than we got. We'll have all of that missing data in a follow-up, so stay tuned.

But okay, what if you're not a big gamer? Well, you're in luck, because this thing is also a productivity beast. In Blender, it gets over double the samples per minute in both the Monster and Junkshop scenes, and just under double in the older Classroom scene. That's a substantial time savings and might be worth it on its own if you're a 3D artist. Similarly, our 4K DaVinci Resolve export finished nearly a minute faster, a difference of roughly 25 percent, something that'll also add up over time.
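To put that Resolve result in concrete terms, here's the implied math; these times are back-calculated from the two figures above rather than measured directly:

```python
# Back-calculating export times from "nearly a minute faster" and "roughly 25%".
# These are implied figures for illustration, not raw benchmark data.
saving_s = 60            # roughly a minute saved per export
saving_frac = 0.25       # roughly a 25 percent difference
old_time = saving_s / saving_frac   # ~240 s implied on the previous-gen card
new_time = old_time - saving_s      # ~180 s implied on the RTX 4090
print(f"~{old_time:.0f} s -> ~{new_time:.0f} s per export")

# Across a 50-export workload, that's nearly an hour saved.
print(f"Saved over 50 exports: ~{50 * saving_s / 60:.0f} minutes")
```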

Those savings will be especially noticeable for timelines with a lot of rendered graphical effects. SPECviewperf, meanwhile, shows massive gains across the board, with the most substantial improvements in 3ds Max, Maya, Medical, and SolidWorks at nearly double the score, while Creo only saw a modest performance improvement of around 10 percent. Still, this is like looking at a completely different class of GPU, not a typical generational improvement. With all these performance gains, maybe Nvidia had to price these at sixteen hundred dollars if they had any hope at all of selling through all of their mining-surplus 3090s.

And that's not everything, either. Another big part of productivity that many companies, Intel and Apple included, are taking very seriously is video encoding, and Nvidia is out to prove that they're not sleeping on the job. Not only do you get the same high-quality encoder that first debuted on the RTX 20 series, as we saw in our DaVinci Resolve test, but you get two of them, and they're both capable of AV1. AV1 is the new codec that's likely to take over for live and on-demand streaming on sites like YouTube and Twitch, and while it can produce significantly better video quality at the same bit rates, it's significantly more time-consuming and difficult to encode unless you have hardware dedicated to it. Intel Arc launched with it as a headline feature, and it's a safe bet that it'll become more important as time goes on. Unfortunately, we don't have the time right now to do a proper test of it for this review, so again, stay tuned for a future video comparing AV1 encoders. What I can tell you today is that it's nearly as fast as Nvidia's existing H.264 and H.265 encoders, which is pretty impressive.
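We can't benchmark the AV1 encoder properly yet, but if you want to try the hardware path yourself, here's roughly what it looks like. This is a minimal sketch assuming a recent FFmpeg build with NVENC AV1 support compiled in (the av1_nvenc encoder); the file names and bitrate are placeholders, not our test settings:

```python
import subprocess

# Minimal sketch: hardware AV1 encode on an RTX 40-series card via FFmpeg's
# NVENC AV1 encoder. Assumes an FFmpeg build with av1_nvenc available;
# input.mp4 and output.mkv are placeholder file names.
subprocess.run([
    "ffmpeg",
    "-i", "input.mp4",     # placeholder source clip
    "-c:v", "av1_nvenc",   # NVENC hardware AV1 encoder (Ada and newer)
    "-b:v", "8M",          # placeholder target bitrate
    "-c:a", "copy",        # pass the audio through untouched
    "output.mkv",
], check=True)
```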
Of course, all of these capabilities come with some drawbacks, the first of which being the power draw. In order to reach the power target that Nvidia set for it, the RTX 4090 universally requires the new ATX 3.0 12VHPWR connector and comes with adapters for it in the box. As the specs indicated, our RTX 4090 draws nearly as much power under gaming load as the RTX 3090 Ti, though it did stay closer to 425 watts than its rated 450.

This is a sharp contrast against the RTX 3090 and 6950 XT, which both consistently pulled nearly 100 watts less. This does not bode well for an eventual RTX 4090 Ti. At least the power targets on the 4080-series cards are lower, though we don't have any of those to test today. When we hit the RTX 4090 with a more demanding load via MSI Kombustor, power consumption skyrockets to the red line for both the RTX 4090 and the 3090 Ti, though, curiously, about halfway through, it kind of dips down to about 440 watts, which is strange. Meanwhile, the RTX 3090 once again doesn't cross the 350-watt threshold, and neither did the 6950 XT.

With great power, of course, comes great thermal output, and the RTX 4090 is no exception, with thermals while gaming tracking roughly in line with the power consumption. Its massive three-and-a-half-slot cooler keeps the hot spot at the 80-degree mark while gaming, placing it squarely between the RTX 3090 and the 6950 XT despite the power draw, a testament to Nvidia's cooler design. Core clocks were obviously way higher than any previous-gen card, and, crucially, they remained just as stable throughout the run, at about 2.6 to 2.7 GHz. When we look again at our Kombustor results, though, the hot spot temperature crosses the 80-degree threshold along with the RTX 3090 Ti, and we again see that slight dip halfway into the run, while the 3090 sits below 70 degrees. Core clocks end up significantly lower, at around 2.25 GHz for the 4090, with the 3090 Ti throttling down to below its less power-hungry sibling. Our AMD card, meanwhile, sort of did a heartbeat pattern of boosting and throttling that could result in uneven performance.

It's worth noting that these tests were performed inside of a Corsair 5000D Airflow with a 360-millimeter rad up top and three 120-millimeter fans drawing air in up front, so it's not like we were starving the cards for oxygen or feeding them hot air from the CPU. In fact, in spite of all the airflow we gave them, the RTX 4090 and 3090 Ti both caused internal case air temperatures to hit highs of 39 to 40 degrees, with significantly higher minimum internal temperatures than the RTX 3090, all at an ambient temperature of around 21 degrees. That means that if your case can just barely handle a 3090, you won't be able to manage a 4090. Sorry, small form factor aficionados: this card is not for you.

After all that, you might look at the spec sheet and have some lingering questions, like: why isn't Nvidia supporting PCI Express Gen 5, and where's DisplayPort 2.0? The answer to those questions, both of them, whether you like it or not, is that Nvidia doesn't think you need them on a sixteen-hundred-dollar graphics card. You smell that? Yeah, that's the distinct scent of copium. While it's true that a GPU running a whole 16 lanes of PCI Express Gen 5 probably won't be super useful today, a GPU running 8 lanes of Gen 5 certainly would be, especially for those who also want lots of NVMe storage, a good bet for a card of this class. Remember: the faster the PCI Express link, the fewer lanes you need for the same bandwidth. Nvidia should know this.
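That lane math is easy to check. Here's a quick sketch using approximate per-lane rates, just the raw transfer rate scaled by 128b/130b encoding, ignoring protocol overhead:

```python
# Approximate usable PCIe bandwidth per lane: transfer rate (GT/s) scaled by
# the 128b/130b line encoding, ignoring packet/protocol overhead.
transfer_rates = {"Gen 3": 8, "Gen 4": 16, "Gen 5": 32}  # GT/s per lane

for gen, gts in transfer_rates.items():
    per_lane = gts * (128 / 130) / 8  # GB/s per lane
    print(f"{gen}: {per_lane:.2f} GB/s/lane | x8: {per_lane * 8:.1f} GB/s | x16: {per_lane * 16:.1f} GB/s")

# Gen 5 x8 (~31.5 GB/s) matches Gen 4 x16, which is exactly why an x8 Gen 5
# GPU would free up lanes for NVMe drives without losing any bandwidth.
```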


As for DisplayPort 2.0, the official line is that DisplayPort 1.4 already supports 8K at 60 Hz and that consumer displays won't need more for a while, which is pretty valid. While it's true that DisplayPort 1.4 only supports 4K up to 120 Hz at full, unadulterated resolution, it can go up to 268 Hz with no chroma subsampling using Display Stream Compression, which is purported to be visually lossless. That's pretty good, considering displays that can refresh faster than that aren't very common. Still, I would argue that this is a sub-optimal experience on such premium hardware. 240 Hz 4K displays exist today and will have DisplayPort 2.0 support soon: Arc already supports it, and RDNA 3 has been confirmed to support DisplayPort 2.0 since May. (If you're curious where those refresh limits come from, there's some quick bandwidth math at the end of this review.) Nvidia is clearly trying to save a buck, again, on a GPU that costs as much as a Series X, a PS5, a Switch, and a Steam Deck combined. Or maybe these cards were just ready and waiting in a warehouse for longer than we realized. It's incredibly amusing to me that the first GPU that might actually be capable of running 8K gaming without asterisks is being handled in such a lackadaisical fashion by Team Green.

The days of generational performance-per-dollar gains are over, they say. Moore's Law is dead, they say. And yet, the competition doesn't seem to think so. While it's true that nobody can touch the 4090's mammoth performance right now, as we saw with the RTX 3090, that won't last forever. And that's where things get weird. As it exists right now, while the RTX 4090 chugs as much power as an RTX 3090 Ti, it is a massive upgrade over both that card and the vanilla 3090 in nearly every respect, well in excess of its price increase. For content creators, this is a no-brainer. But I cannot in good conscience recommend that gamers with more dollars than sense go out and purchase a piece of hardware that is incapable of driving the also-very-expensive displays that might actually be able to take advantage of it, especially when a less expensive and less power-hungry card, perhaps even Nvidia's own RTX 4080s, can do the same thing.
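As promised, here's the rough bandwidth math behind those DisplayPort refresh limits. This is a simplified sketch that ignores blanking intervals and exact DSC compression ratios, so treat the outputs as theoretical ceilings rather than spec figures:

```python
# DisplayPort 1.4 (HBR3): 32.4 Gbit/s raw, 8b/10b encoding -> 25.92 Gbit/s usable.
link_bps = 32.4e9 * (8 / 10)
pixels_4k = 3840 * 2160

for label, bits_per_pixel in [("Uncompressed 8-bit RGB", 24), ("DSC at ~3:1", 8)]:
    max_hz = link_bps / (pixels_4k * bits_per_pixel)
    print(f"{label}: ~{max_hz:.0f} Hz ceiling at 4K")

# Uncompressed: ~130 Hz ceiling, landing near the quoted 120 Hz once display
# blanking is accounted for. With DSC: ~390 Hz ceiling; the quoted 268 Hz
# figure sits below it because real timings and DSC rates eat into the budget.
```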
