
Now that the smoke has cleared and the hype has died down, Nvidia's RTX 2080 and 2080 Ti remain challenging cards to evaluate. While every GPU launches with a mixture of current and forward-looking features, the RTX series leans into the idea of the future harder than most, which leaves it with comparatively little evidence with which to support its claims. Fortunately, our ability to test these GPUs in shipping titles isn't nearly so limited.

Nvidia's Turing family currently consists of two GPUs: the RTX 2080, a $700 retail / $800 Founders Edition GPU, which replaces the GTX 1080 Ti in Nvidia's product stack and competes directly against that card, and the RTX 2080 Ti, a $1,200 GPU that's not yet shipping. The RTX 2080's current shipping price is hovering between $770 and $790; as of this writing, the least expensive cards are $789. The 2080's $700 launch price is $150 more than the GTX 1080's back in 2016 (which is why we're now comparing it directly against the 1080 Ti), while the 2080 Ti's launch price has risen by $500 compared with the Nvidia GeForce GTX 1080 Ti.

Competitors

While we've had both the RTX 2080 and 2080 Ti in-house for testing, we were unable to test any of the DLSS or RTX demos that Nvidia has distributed, as the tests are not currently publicly available. That's not a particular problem from our perspective, however. Our tests and reviews have always been heavily weighted towards evaluating the practical benefits of a purchase at the time the article is written, as opposed to focusing on theoretical benefits that may one day materialize.


At the same time, however, I want to acknowledge that Nvidia has unambiguously positioned the RTX family as a fundamentally new type of GPU intended for new types of emerging workloads. These factors need to be taken into account in any evaluation of the GPU, and we're going to do so. But unpacking the value proposition of Nvidia's RTX family as it concerns next-generation games and the overall likely longevity of the family is a significant enough undertaking that it deserves to be evaluated independently from the question of how the Turing RTX family performs in modern games. Let's look at that first.

Evaluating the Long-Term Ray-Tracing Potential of the RTX 2080, RTX 2080 Ti

Every GPU launch is an opportunity for a company to argue both that it delivers better performance in currently shipping titles and that it offers a new set of features that will enhance performance or visual quality in the future. Turing is far from the first GPU to look ahead to what the future might bring, but it dedicates a substantial amount of its total die to enabling features that you can't currently use in games. Architecturally, Turing closely resembles Volta, with a new set of processing cores Nvidia calls RT Cores. These RT cores perform the ray-intersection tests critical to the implementation of Nvidia's real-time ray tracing technology. Meanwhile, the Tensor Cores that debuted in Volta have been tweaked and improved for Turing, with support for new INT8 and INT4 operating modes at 2x and 4x FP16 performance, respectively.
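
To make "ray-intersection tests" concrete: the core operation an RT core accelerates is deciding whether a ray strikes a triangle in the scene. Below is a minimal CPU-side sketch of one such test using the well-known Möller–Trumbore algorithm; it's purely illustrative and says nothing about Nvidia's actual hardware implementation:

```python
# Möller–Trumbore ray-triangle intersection: the kind of test RT cores
# execute in hardware, millions of times per frame. Illustrative only.
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the hit distance t, or None if the ray misses the triangle."""
    edge1, edge2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < eps:          # ray parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:      # hit point outside triangle (barycentric u)
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:  # hit point outside triangle (barycentric v)
        return None
    t = np.dot(edge2, qvec) * inv_det
    return t if t > eps else None  # hits behind the origin don't count

# One ray, one triangle:
t = ray_hits_triangle(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]),
                      np.array([-1.0, -1.0, 0.0]), np.array([1.0, -1.0, 0.0]),
                      np.array([0.0, 1.0, 0.0]))
print(t)  # 1.0: the ray hits the triangle one unit from its origin
```

Even a single 1080p frame traced at one ray per pixel means roughly two million rays, each of which must be tested against the scene's acceleration structure, which is why dedicated hardware for this operation matters.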

Outside of DLSS, it's not clear how Nvidia will use these features, and it hasn't disclosed too much information about RTX either, though we've covered the available data previously. But just looking at the math on relative die sizes and power consumption tells us a lot: Nvidia has gone to the trouble of integrating its tensor cores and machine learning capabilities into consumer products because it believes these new technologies could yield game improvements if properly utilized. It's willing to pay a die and performance penalty to try to hit those targets, and it's picked a launch moment when there's very little competition in the market in an attempt to establish them early.

It's important to draw a distinction between what Nvidia is attempting to accomplish with RTX and DLSS (namely, the introduction of ray tracing and AI noise-processing features as complementary technologies to rasterization, or even replacements for rasterization over the long term) and the question of whether the RTX 2080 and 2080 Ti represent a good value for consumers. The first question will play out over the next 5-10 years across a number of GPU architectures from multiple companies, while the second focuses on specific cards. I've been a fan of ray tracing technology for years and have covered it for ExtremeTech on several occasions, including profiling Nvidia's own iRay technology. After PowerVR's attempt to push the tech in mobile came to naught, seeing it debut on desktops in the near future would be genuinely exciting.

In many ways, Turing reminds me of the G80 that powered GPUs like the GeForce 8800 GTX. Like the G80, Turing represents a new computing model — a model that Nvidia took a substantial risk to build. When it debuted in 2006, the G80 was the largest GPU ever built, with 681 million transistors and a then-enormous 480mm² die. Nvidia's G70, launched in 2005, had been a 333mm² chip with 302 million transistors, for comparison. The G80 was the first PC GPU to use a unified shader architecture and the first to support DirectX 10. It was the first programmable GPU — Nvidia launched CUDA in 2006, and CUDA 1.0 was supported by the G80. Few today would look back at G80 and declare that the chip was a mistake, despite the fact that it represented a fundamentally new approach to GPU design.

Putting Turing in Historical Context

Part of Turing's problem is the inevitable chicken-and-egg scenario that plays out any time a manufacturer introduces new capabilities. It takes developers a significant amount of time to add support for new features. By the time a capability has been widely adopted in-market, it's often been available for several years. Buying into a hardware platform at the beginning of its life cycle is often a bad idea — as illustrated by the G80 itself. By examining how this history played out, we gain a better sense of whether it's a good idea to jump in at the very start of a major new technology introduction.

When the 8800 GTX came out, it blew the doors off every other GPU in DirectX 9, as shown in the slide below:

Its power efficiency and performance per watt were similarly advantageous; the 8800 GTX was more than twice as power-efficient as its competitors and predecessors.

Now, at first, these advantages seemed likely to be well-distributed across the entire product family. But once DirectX 10 and Windows Vista were both available, this proved not to be the case. For these next slides, a bit of explanation is in order: The 8800 GTX and 8800 GTS were based on the same G80 silicon and debuted at the same time, in 2006. The 8800 GT shown in the results below is based on the smaller, more-efficient G92 core that debuted almost a year after the 8800 GTX. The launch price of the 8800 GT was $300; launch price of the 8800 GTS was $400. Keep that in mind when you look at the results from Techspot's Crysis coverage below:

The gap between DX9 and DX10 performance was hardly unique to Nvidia; AMD owners often were hammered as well. But the point stands: Often, GPUs that performed quite well under DirectX 9 staggered with DX10, to the point that only the highest-end cards from either vendor could even use these features. And as the 8800 GT's performance illustrates, a cheaper GPU that arrived a year later beat the snot out of its more-expensive cousin. The gap between the 8800 GTS and the 8800 GTX in DX9 and Company of Heroes is 1.44x. In Crysis under DX9, it's 1.39x. In Crysis under DX10, it's 1.77x.
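
The gap figures above are simple frame-rate ratios. A trivial sketch (the fps values below are hypothetical placeholders, not Techspot's numbers; only the ratio math is the point):

```python
# A performance "gap" between two GPUs is the ratio of their frame rates.
# These fps values are hypothetical stand-ins for the charted results.
def perf_gap(faster_fps, slower_fps):
    return faster_fps / slower_fps

gtx_fps, gts_fps = 46.1, 26.0   # hypothetical DX10 Crysis results
print(f"{perf_gap(gtx_fps, gts_fps):.2f}x")  # ~1.77x
```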

Once RTX is available, how well will the RTX 2080 match the RTX 2080 Ti's performance? How closely will the RTX 2070 match the RTX 2080? We don't know. Nvidia hasn't said.

But it's impossible to look back at the DirectX 9 to DX10 shift and identify any of the GPUs as a particularly good bargain if your goal was to play DX10 games at good frame rates. The G80 offered great performance in DX9, but only the most-expensive chips were capable of handling DirectX 10 at high detail levels. Furthermore, by the time DX10 games were widely in-market, G80 had been replaced by G92 or even GT200. Gamers who bought into the 8000 family at lower price points got screwed when their GPUs proved generally incapable of DX10 at all. The overall impact of this was ameliorated by the fact that DX10 ultimately didn't end up delivering much in the way of benefits — but it still serves as an important example of how excellent performance in established APIs doesn't guarantee excellent performance when new capabilities are switched on.

DirectX 11

DirectX 11 and DX12 don't represent as large a break with previous GPUs as DX10 did, but there are still some useful lessons in each. In the case of DX11, new GPUs that supported the API did a generally better job of offering solid performance within its feature set. But at the marketing level, despite the promised importance of new features like tessellation, the long-term impact on gaming was much smaller than the marketing implied it would be. Shots like these were frequently used to illustrate the difference between having tessellation enabled versus disabled:

In reality, the difference looked more like this:

Did DirectX 11 improve image quality over DirectX 9? Absolutely. But if you read the early literature on DX11 or paid attention to Nvidia's marketing on topics like tessellation, you'd be forgiven for thinking the gap between DX9 and DX11 was going to be much larger than it actually turned out to be. And once again, we saw real improvements in API performance in the generations that followed the first DX10 cards. G80 could run the API, but G92 and GT200 ran it much more effectively. The GTX 480 supported DirectX 11, but it was the GTX 680 that really shone.

Mantle and DirectX 12

Mantle and DirectX 12 offer us the chance to look at this question from AMD's perspective rather than Nvidia's. When Mantle debuted in 2013, there were people who argued for the then-imminent superiority of GPUs like AMD's R9 290 and R9 290X, because the advent of low-latency APIs would supercharge these GPUs past the competition. Reality has not been kind to such assessments. Five years after Mantle and three years after Windows 10 launched with DirectX 12 support, there are only a relative handful of low-latency API games in-market. Some of these do indeed run better on AMD hardware under DX12 as opposed to DX11, but if you bought a Hawaii GPU in 2013 because you thought Mantle and DirectX 12 would usher in a new wave of gaming that you wanted to be positioned to take advantage of, you didn't really receive much in the way of benefits.

Some DirectX 12 and Vulkan games have taken specific advantage of AMD GPUs' ability to perform asynchronous compute and picked up some additional performance by doing so, but even the most optimistic look at DirectX 12 performance ultimately shows that it only boosts Team Red GPUs a moderate amount in a relatively small handful of titles. Given that some of this increase came courtesy of multi-threaded driver support being standard in DX12 (AMD didn't support this feature in DX11, while Nvidia did), we have to count it as an intervening variable as well.

DirectX 12 and Mantle, in other words, don't make a case for buying a GPU right at the beginning of a product cycle, either. Now, with RTX becoming part of the DirectX 12 specification, one can argue that we're finally seeing DX12 start to pull away from previous API versions and establish itself as the only game in town if you want to create certain kinds of games. But it's taken three years for that to happen. And with AMD dominating consoles and the degree of simultaneous development in the industry, any major push to bring ray tracing into gaming as more than an occasional sponsored-game feature will require buy-in from AMD, and possibly Intel if that company is serious about entering the graphics market.

Why Turing Will Never Justify Its Price Premium

With Turing, Nvidia has chosen to launch its new GPUs at a substantial price premium compared with older cards. The question is, are the price premiums worth paying? While this always depends on one's own financial position, our argument is that they are not.

The history of previous major technology transitions suggests that it will be the first generation of GPUs that takes the heaviest hit from enabling new features. Nvidia has historically offered new GPUs that quickly improved on their predecessors in then-new APIs and capabilities, including G92 over G80 and the GTX 580 over the GTX 480. The gap between the 480 and 580 was less than eight months.

All available information suggests that the RTX 2080 Ti may be needed to maintain a 60fps frame rate target in RTX-enabled games. It's not clear if the RTX 2080 or 2070 will be capable of this. The RTX push Nvidia is making is a heavy, multi-year lift. The history of such transitions in computing suggests they happen only over three to five years and multiple product generations. The worst time to buy into these products, if your goal is to make the best use of the features they offer, is at the beginning of the capability ramp.

We also already know that 7nm GPUs are coming. While AMD's 7nm Vega is intended as a low-volume part for the machine-learning market, we may well see 7nm GPUs in 2019. Turing could be a short-term partial product replacement at the high end, akin to Maxwell 1 and the GTX 750 Ti. But even if this isn't the case, RTX and DLSS appear likely to be confined to the upper end of Nvidia's product roadmaps. Even if the RTX 2060 supports them, the price increases to the RTX 2070 and 2080 mean that the RTX 2060 will be a ~$350 – $400 GPU, not a $200 – $300 GPU like the GTX 1060. We don't know how far down its own stack Nvidia can enable RTX or DLSS, but nobody thinks these features will be making their way to the mainstream market this year.

Finally, there's the fact that for years, Nvidia and AMD have beaten a relentless drum on higher resolutions. It makes a certain degree of sense for Nvidia to pivot to address the 1080p market because, frankly, there are a whole hell of a lot more 1080p panels in the world than anything else. Steam reports 62.06 percent of the market uses 1080p, compared with 3.59 percent at 1440p and 1.32 percent at 4K. But it also means asking high-end gamers to choose between using the high-resolution and/or high-refresh-rate displays they may have paid premium dollar for, and asking them to tolerate gameplay in the 40-60fps range at 1080p. In some cases, DLSS and RTX together may more than compensate for any lost resolution or speed, but that's inevitably going to depend on the size of the performance hit RTX carries as well as the GPU and game in question. The fact that Nvidia has played its performance cards so very close to its chest on this one isn't a great sign for first-generation ray tracing as a whole. Put differently: If they could show ray tracing at 60fps on the equivalent of an RTX 2060, with an RTX 2080 Ti breaking 100fps, they'd have done it already.

The RTX Family Isn't Priced for Adoption

The only way for first-generation RTX cards to be worth what Nvidia thinks you should pay for them is if the new features are both widely used and a significant improvement. The only way that's ever going to happen is if developers (over and above the ones Nvidia partners with and pays to adopt the technology) bake it in themselves. And the only way that happens is if consumers own the cards.

Check out the list of the most popular GPUs on Steam, keeping in mind that Pascal has been in-market for more than two years, with the GTX 1080 Ti available for nearly 18 months. We surveyed the Top 20 cards below, corresponding to 59.61 percent of the total graphics market:


Very few people own a GPU in the RTX 2070 – 2080 Ti price range. 2.69 percent of the market has a 1080 (corresponding to the 2070), 1.49 percent has a 1080 Ti (corresponding to the RTX 2080), and we don't know how many people will buy $1,200 2080 Tis. But it's not going to be tons. Nvidia's ability to move the market towards RTX and DLSS will be constrained by the number of people who own these cards.

Second, we've taken the Top 20 cards, isolated the Nvidia GPUs, and broken down the results by architecture family, as shown in the sketch below. To be clear, these figures refer to each architecture's market share within the Top 20 cards, not the entire Steam database. But more than two years after launch, during a time period when Nvidia has dominated the gaming market, we're still at 65 percent Pascal, 31 percent Maxwell, and roughly 4 percent Kepler. Pascal hasn't swept the field clear of Maxwell and Kepler in over two years, and that's with a top-to-bottom set of solutions in-market. Turing will ramp more slowly. Its pricing and the limited number of cards guarantee it. That won't change with more SKUs unless those SKUs are also RTX / DLSS capable, and nobody is taking bets on a 12nm RTX 2050 Ti that can handle ray tracing in 2019 at a $250 midrange price point.
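
To be concrete about that normalization: each architecture's share within the Top 20 is its summed per-card Steam share divided by the Top 20's combined share. A quick sketch, with hypothetical per-card percentages standing in for the real survey data:

```python
# Renormalizing Steam survey shares within a Top 20 subset.
# Per-card percentages here are hypothetical stand-ins.
top20_nvidia = {
    "GTX 1060":    ("Pascal", 14.2),
    "GTX 1050 Ti": ("Pascal", 9.6),
    "GTX 970":     ("Maxwell", 3.2),
    "GTX 960":     ("Maxwell", 2.9),
    "GTX 760":     ("Kepler", 1.1),
}
total = sum(share for _, share in top20_nvidia.values())

by_arch = {}
for arch, share in top20_nvidia.values():
    by_arch[arch] = by_arch.get(arch, 0.0) + share

for arch, share in by_arch.items():
    print(f"{arch}: {100 * share / total:.1f}% of the surveyed cards")
```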

Nvidia Isn't Really Trying to Build a New Standard (Yet)

I believe Nvidia when it talks about wanting to drive gaming in a new direction and to use more of the silicon it's built into its GPUs. I don't know how much additional value Nvidia can squeeze from its AI and machine learning specialized silicon, but I completely understand why the company wants to try.

Image by Anandtech

But at the same time, let's acknowledge that launching RTX as a much-hyped feature only available on a handful of expensive cards is very much Nvidia's MO. Whether you view it as a good thing or a bad thing, Nvidia has a long-standing habit of attempting to monetize enthusiast features. Its G-Sync display technology offers no intrinsic advantage over FreeSync, yet to this day Nvidia insists on the fiction that you must purchase a monitor with special, Nvidia-branded certification and a correspondingly higher price in order to use it.

When Nvidia was pushing PhysX ten years ago, it talked up the idea of buying higher-end GPUs to support the capability, or even using two Nvidia GPUs in a single system for PhysX offloading. PhysX was very much positioned as a high-end visual feature that would add an incredible layer of depth and realism to gaming that others weren't going to get. When Nvidia decided to jump onboard the 3D craze, it created its own sophisticated Nvidia 3D Vision system. Look back at the features and capabilities Nvidia has historically built and it's clear that the company chooses to invest in creating high-end ecosystems that tie customers more tightly to Nvidia hardware and earn Nvidia more revenue, rather than focusing on open standards that would benefit the community but earn it less money.

There's nothing wrong with a company building a market for its own hardware by focusing on delivering a premium experience. I'm not implying that Nvidia has done anything ethically questionable by building a community of enthusiasts willing to pay top dollar for its own products. But there's also nothing wrong with acknowledging that the same company busily engaged in upselling Turing has a history of this kind of behavior, and a less-than-perfect track record of delivering the benefits its marketing promises.

What About Robust Early Support?

One way AMD and Nvidia both try to allay fears about weak feature support from developers is by lining up devs to promise feature adoption from Day 1. AMD did this with its Mantle launch; Nvidia is doing it now with RTX and DLSS. But for most people, lists like this are worth much less than the sum of their parts. Here's Nvidia's list:


Of the games confirmed to use RTX alone, only Battlefield V and Metro Exodus have major brand presence. The DLSS list is larger and more interesting, with five well-known titles (Ark, FFXV, Hitman, PUBG, We Happy Few), but DLSS also lacks some of the visual punch of real-time ray tracing. Only Shadow of the Tomb Raider and Mechwarrior 5: Mercenaries will support both, as far as AAA or well-known franchises are concerned.

Will this list expand? I'm sure it will. But games take time to deliver, and several titles on this list aren't set to launch until 2019 as it is, assuming they aren't delayed. It's entirely possible that by the time many of these games are out, either Turing's prices will have come down thanks to increased competition, or Nvidia will have introduced new cards already. But at the price points Nvidia has chosen for its launch, it's clear the company is focused on taking profits out of the product line, not pushing cards into the mass market to spark widespread adoption as quickly as possible. That's an entirely valid strategy, but it also means the RTX ecosystem Nvidia wants to assemble will likely take longer to form. The company hasn't done itself any favors in certain regards — you'll need GeForce Experience to use DLSS, according to Tom's Hardware, despite the fact that there's no objective reason whatsoever for Nvidia to receive any personal information from anyone who buys its video cards.

But even in the absolute best-case scenario, early support is going to be slow to build. How much this matters to you will always be a matter of personal preference — if all you play is Battlefield V, and BF5's ray tracing implementation is amazing, it's entirely possible you'd feel the RTX 2080 Ti is a bargain at twice the price. But companies build lists of titles with baked-in upcoming support precisely because they want you to focus on the number rather than the names. "21 games with upcoming support!" sounds much better than "21 games with upcoming support, of which 17 will ship, 10 will be good, six will have personal appeal, and you'll actually buy two."

This is scarcely unique to Nvidia. But it's still a problem for anyone who wants to actually make use of these features today in more than a handful of titles.

Test Platform

That brings us back to the rest of today's games, and the most important question here for anyone considering Nvidia's latest cards: How do the RTX 2080 and RTX 2080 Ti perform in current titles? For a launch this significant, we've revisited our existing suite of benchmarks and updated our hardware platform. Our updated testbed consists of a Core i7-8086K CPU on an Asus Prime Z370-A motherboard, 32GB of DDR4-3200 RAM, and Windows 10 1803 updated with all available patches. The RTX and GTX GPUs were tested using Nvidia's RTX 411.63 driver, while the AMD Vega 64 was tested using Adrenalin Edition 18.9.3. We've also expanded our tests somewhat to include a larger range of AA options in various titles and to test a few titles in multiple APIs.

A few notes before we dive in.

We've tested Ashes of the Singularity in both DirectX 12 and Vulkan. Vulkan performance is notably lower for both companies, and no exclusive fullscreen mode is offered, but according to representatives at Oxide Games, it's up to both AMD and Nvidia to improve their performance in that API when running Ashes. We've used a different graph style for these graphs because Ashes doesn't return a simple minimum frame rate metric.

We've also tested Total War: Warhammer II in both DirectX 11 and DirectX 12 modes. AMD Vega 64 owners (and possibly Polaris gamers as well) should use DirectX 12 in that title for optimal performance, but Nvidia's performance in DX12 is quite poor. Our aggregate charts for 1080p, 1440p, and 4K use the DirectX 12 results when calculating AMD's median level of performance and the DirectX 11 results when calculating Nvidia's. DirectX 12 is used for Ashes of the Singularity results for both vendors.

Finally, in the graphs below, in all games except Ashes of the Singularity, the orange bars are minimum frame rates and the green bars are average frame rates. High minimum frame rates combined with high average frame rates are a very good thing; the closer the two numbers are, the smoother and more consistent overall play is likely to be.
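
For readers who want to run the same analysis on their own captures, here's a minimal sketch of how average and minimum frame rates fall out of a per-frame time log. The frame times below are hypothetical; real data would come from a capture tool such as PresentMon or OCAT:

```python
# Deriving average and minimum fps from per-frame render times (ms).
# The sample data is a hypothetical stand-in for a real capture log.
frame_times_ms = [16.7, 16.9, 17.1, 16.8, 33.5, 17.0, 16.6, 16.9]

avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
min_fps = 1000.0 / max(frame_times_ms)   # slowest single frame

print(f"average: {avg_fps:.1f} fps, minimum: {min_fps:.1f} fps")
# The closer the minimum sits to the average, the smoother the experience;
# the 33.5ms spike above is exactly the kind of stutter minimums expose.
```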

Evaluating Overall RTX 2080, 2080 Ti Performance

We've decided to break out our conclusions a little differently than in the past. We've taken the geometric mean of our overall results at each resolution and plotted the overall score of each GPU as a percentage of the next-lowest GPU's performance. We've also examined the performance-per-dollar rate you're effectively paying with each card by evaluating the cost of each GPU in terms of dollars spent per frame of performance. In this scenario, the higher performance of high-end cards works in their favor by making them (relatively) better deals as resolution increases. The gap between the Vega 64 and RTX 2080 Ti is smaller in relative terms at 4K than at 1080p as a result. Nonetheless, the RTX 2080 and 2080 Ti are markedly more expensive by this metric than previous generations.
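
For transparency's sake, both calculations are simple. Here's a minimal sketch of the geometric mean and dollars-per-frame math, with hypothetical fps and price figures standing in for our actual results:

```python
# Geometric mean across titles, then dollars-per-frame. All fps and
# price numbers are hypothetical stand-ins for the review's real data.
from math import prod

def geomean(values):
    return prod(values) ** (1.0 / len(values))

cards = {
    # name: (price_usd, [fps in each benchmarked title])
    "GTX 1080 Ti": (700, [98, 122, 87, 140]),
    "RTX 2080":    (790, [104, 128, 91, 148]),
    "RTX 2080 Ti": (1200, [131, 163, 118, 185]),
}

for name, (price, fps) in cards.items():
    score = geomean(fps)
    print(f"{name}: {score:.1f} fps geomean, ${price / score:.2f} per frame")
```

The geometric mean is used rather than a simple average so that no single outlier title dominates the overall score.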

When you consider the overall results, several patterns emerge. While they're closely matched in performance, the GTX 1080 edges out Vega 64 in performance-per-dollar thanks to a lower base price and slightly higher results. Overall performance scaling as a function of price isn't very good. We don't expect the high-end market to offer strong results here, but the best gains we get are from the RTX 2080 over and above the GTX 1080 Ti, where a 4-6 percent performance increase is available for a ~1.1x increase in price. That's not terrible, but it's not particularly noteworthy, either. The 2080 Ti offers a significant level of performance improvement, but the price increase outstrips the gains for all but the most well-heeled gamers.

Because the RTX 2080 and RTX 2080 Ti are so different, we've written a separate conclusion for each.

Should You Buy the RTX 2080?

The RTX 2080 is both a better value than the RTX 2080 Ti and a less appealing GPU overall. The best we can say for it is this: The additional price premium it commands over the GTX 1080 Ti is roughly twice as much as the additional performance it offers over and above that GPU. This is objectively a bad deal, but it's no worse than the typical scaling you get from luxury GPUs in the $700+ price range — and that means we can understand why someone might opt to pick up an RTX 2080 instead of a GTX 1080 Ti. The ray tracing and DLSS capabilities the RTX 2080 offers could be seen as justifying the ~10 percent additional price premium over the GTX 1080 Ti. All of this, of course, assumes you're in the market for a $700 GPU in the first place.

The RTX 2080's biggest problem is that it's a completely unexciting GPU. Evaluated in today's titles, it offers marginally more performance than the 1080 Ti, but, with rare exception, not enough that you'd notice. ExtremeTech acknowledges that the RTX 2080 is faster than the GTX 1080 Ti, but we do not find its positioning or performance compelling, and we do not particularly recommend it over and above the 1080 Ti.

Should You Buy the RTX 2080 Ti?

The RTX 2080 Ti does offer impressive performance and absolutely moves the bar forward. But it does so with a mammoth price increase that is completely unjustifiable. If money is no object, the 2080 Ti is a great GPU. But given that only 1.49 percent of Steam users own a GTX 1080 Ti — and that GPU is $500 less than the RTX 2080 Ti — it's clear that money is an object for the vast majority of gamers. And that changes things.

The GTX 1080 Ti is 1.3x faster than the GTX 1080 at 4K and costs 1.5x more. The RTX 2080 Ti is 1.36x faster than the GTX 1080 Ti and costs 1.7x more. High end of the market or not, there comes a point when a component hasn't done a good job of justifying its price increase. And evaluated in currently shipping games, the RTX 2080 Ti hasn't. ExtremeTech readily acknowledges that the RTX 2080 Ti is the fastest GPU you can buy on the market. It's an impressive performer. But the premium Nvidia has slapped on this card just isn't justified by the performance it offers in shipping titles. It's not even clear if it'll be defensible in RTX or DLSS gaming.

The single argument you can make for the RTX 2080 Ti that doesn't apply to any other GPU in the stack is that it moves the needle on absolute gaming performance, pushing maximum speeds forward in a fashion the RTX 2080 doesn't. And that's great until you realize that we're talking about a GPU that costs $1,200. Most of the gaming rigs I've built for people haven't cost that much money.

Nvidia didn't slap a $500 premium on this GPU to cover the cost of manufacturing it — it's a big GPU, but it's not big enough to justify that much of a price increase, and TSMC's 16/12nm node is now quite mature, so manufacturing costs will have come down as a result. Nvidia slapped a $500 premium on this GPU because AMD isn't currently competing at the top of the market and Intel won't have cards out the door until sometime in 2020. As many have pointed out, this is just smart economics for Nvidia. But Jen-Hsun's need for a solid gold bathtub doesn't trump the need for Nvidia to justify the prices it charges for its own hardware. And it hasn't. Nvidia's overall gross margin for its entire business is sitting at 60 percent; the company is scarcely hurting for funds to drive all the development it's doing.

ExtremeTech again acknowledges that the RTX 2080 Ti is the fastest GPU on the market, but unless price is literally no object to you, we don't recommend you buy one. In fact, we explicitly recommend you don't. While the performance is excellent, it's not worth the premium Nvidia wants to charge for it in currently shipping titles. Buy this card, and you send a message to Nvidia that $1,200 ought to be the price for high-end GPUs.

The RTX 2080 and 2080 Ti aren't bad GPUs, but they're both bad deals, with no reasonably priced, reasonably performing cards lower in the stack to anchor the family. Consumers outside the ultra-high-end will be better served by older cards from Nvidia's Pascal family. Turing does not offer sufficient advantages in current games to justify the price premium Nvidia has slapped on it.

Will RTX's Architecture Deliver in the Future?

This article is one of the longest pieces I've ever written for ExtremeTech and the longest single piece of work I've written in years. But I'll acknowledge I'm taking a dimmer view of Turing than my colleagues in the review community. I wanted to lay out why.

If the RTX 2080 had come in at GTX 1080 pricing and the RTX 2080 Ti had added $100 – $150 to the GTX 1080 Ti, I still wouldn't be telling anyone to buy these cards expecting to dance the ray-traced mamba across the proverbial dance floor for the next decade. But there would at least be a weak argument for some real-world performance gains at improved performance-per-dollar ratios, with a little next-gen cherry on top. With Nvidia's price increases factored into the equation, I can't recommend spending top dollar to buy silicon that will almost certainly be replaced by better-performing cards at lower prices and lower power consumption within the next 12-18 months. Turing is the weakest generation-on-generation upgrade Nvidia has ever shipped once price increases are taken into account. The historical record offers no evidence to suggest that anything below the RTX 2080 Ti will be a credible performer in ray-traced workloads over the long term.

ExtremeTech, therefore, does not recommend purchasing either the RTX 2080 or RTX 2080 Ti for their hoped-for performance in future games. Ray tracing may be the future of gaming, but that future isn't here yet. Mainstream support for these features will not arrive on 12nm GPUs and will not be adopted widely in the mass market before the next generation of cards. And again — every dollar you drop on $700 and $1,200 GPUs is a dollar that tells Nvidia it ought to be charging you more for your next graphics card.

The RTX 2080 and 2080 Ti are the start of a new approach to rendering. Long-term, the technological approach they champion may be successful. But that doesn't make either of them a good buy today, and the historical record suggests no ground-breaking GPU is a particularly good buy its first time out of the gate. Ironically, that's entirely Nvidia's fault. If the company were less good at immediately overtaking its own previous GPU architectures with new hardware that offers dramatically better performance and price/performance ratios, I'd have less of a reason to take the stance I'm taking. But between the price increases and Nvidia's own historical performance, the smart move here is to wait. Whether you're waiting for sane pricing, 7nm, or are just happy with your current GPU — that's up to you.

Now Read: Nvidia Shares Skid on Disappointing RTX GPU Launch, Nvidia Announces New Tesla T4 GPUs for Data Center Inferencing, and Nvidia Will Keep Pascal GPUs on Store Shelves After RTX Launches