The 5 worst Nvidia GPUs of all time

Nvidia has a strong pedigree of making great graphics cards. It has never really been the underdog, and its best GPUs have consistently eclipsed those of its rival AMD. But despite Nvidia's penchant for innovation and technological advancement, it has released some truly bad cards, cursed not necessarily by bad technology but often by bad decision-making. Let's recall some Nvidia GPUs we might like to forget.

GeForce GTX 480

The way it's meant to be grilled


Although Nvidia has been in business for over 20 years, the company has only released one GPU that was truly terrible on a technological level, and that's the GTX 480. Powered by the Fermi architecture, the GTX 480 (and the entire 400 series) was plagued by a multitude of problems, which in turn allowed AMD to close the gap and nearly overtake Nvidia in market share.

The 480's biggest claim to fame (or infamy) was its power consumption and heat. Tests by Anandtech found that a single GTX 480 consumed as much power as entire dual-GPU systems and could hit 94°C in normal games, which was insane at the time. It was an unfortunate coincidence that the 480's stock cooler looked like a grill, prompting critics to twist Nvidia's slogan "The way it's meant to be played" into "The way it's meant to be grilled."
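
Reproducing that kind of measurement today doesn't take lab equipment. As a rough illustration (not how Anandtech tested), here is a minimal sketch that logs temperature and board power through NVML, the monitoring library that ships with Nvidia's driver; the device index and one-second polling interval are arbitrary choices, and the power readout requires a card that exposes a power sensor:

```c
// Hedged sketch: poll GPU temperature and board power via NVML.
// Build with: gcc gpulog.c -o gpulog -lnvidia-ml
#include <nvml.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) return 1;  // GPU 0 assumed

    for (int i = 0; i < 10; ++i) {                    // ten one-second samples
        unsigned int tempC = 0, powerMw = 0;
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &tempC);
        nvmlDeviceGetPowerUsage(dev, &powerMw);       // reported in milliwatts
        printf("%3u C  %6.1f W\n", tempC, powerMw / 1000.0);
        sleep(1);
    }

    nvmlShutdown();
    return 0;
}
```

Run it while a game loops and a card like the 480 will show its true colors.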

To make matters worse, Fermi was about six months late to the party, as AMD's HD 5000 series had come out first. Sure, the 480 was the fastest single-GPU graphics card, but AMD's HD 5870 offered 90% of its performance without being a toaster. AMD's dual-GPU HD 5970 was faster outright, and back in 2010, CrossFire had much better support in games. Last but not least, the 480's $500 price was simply too high for it to be competitive.

Nvidia ignominiously killed the GTX 400 series just eight months later by launching the GTX 500 series, which was basically a fixed version of Fermi. The new GTX 580 was faster than the GTX 480, used less power and was the same price.

GeForce GTX 970

3.5 equals 4

When it first launched, the GTX 970 was actually very well received, much like other 900-series cards based on the legendary Maxwell architecture. It cost $329 and was as fast as AMD's 2013 flagship R9 290X while using significantly less power. In Anandtech's view, it was a strong contender to be the generation's value champion. So what did the 970 do so badly that it ended up on this list?

Well, a few months after the 970's release, new information about its specifications came to light. Although the GPU had 4GB of GDDR5 VRAM, only 3.5GB of it was usable at full speed, with the remaining half gigabyte running barely faster than DDR3, the system memory a GPU falls back on when it runs out of VRAM. For all intents and purposes, the 970 was a 3.5GB GPU, not a 4GB one, and this led to a lawsuit that Nvidia settled out of court, paying every 970 owner $30.
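
Incidentally, the segmentation was easy to demonstrate with the kind of community micro-benchmark that circulated at the time: allocate VRAM chunk by chunk, time a write to each chunk, and watch bandwidth fall off a cliff once the fast 3.5GB pool is exhausted. Here's a minimal sketch of the idea, assuming a CUDA toolchain; the 256MB chunk size and chunk count are illustrative:

```c
// Hedged sketch: walk up VRAM in 256MB chunks and time a write to each,
// to expose a slow memory segment like the GTX 970's final 0.5GB.
// Build with: nvcc vramprobe.cu -o vramprobe
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    const size_t chunk = 256ULL << 20;                // 256 MiB per allocation
    for (int i = 0; i < 16; ++i) {                    // up to 4 GiB total
        void *buf;
        if (cudaMalloc(&buf, chunk) != cudaSuccess) break;  // stop when VRAM runs out

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        cudaMemset(buf, 0xAB, chunk);                 // write the whole chunk
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("chunk %2d: %6.1f GB/s\n", i, (chunk / 1e9) / (ms / 1e3));

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        // Deliberately never cudaFree(buf): keeping earlier chunks alive forces
        // later ones into memory the driver would otherwise avoid.
    }
    return 0;
}
```

On a healthy card, every chunk reports roughly the same bandwidth; on a 970, the last chunk or two should crater.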

In reality, the performance impact of half a gigabyte less VRAM was virtually non-existent, according to Anandtech. Back then, most games that required more than 3.5GB of VRAM were just too intense, even for the GTX 980, which had the full 4GB of VRAM.

There are quite a few games these days where the 970 struggles because of its suboptimal memory configuration. But performance isn't the point here; ultimately, Nvidia more or less lied about what the GTX 970 had, and that's unacceptable and stains the legacy of an otherwise great card. Unfortunately, playing fast and loose with GPU specs is a habit Nvidia has struggled to break ever since.

GeForce GTX 1060 3GB

Ceci n’est pas une 1060


After the 970 debacle, Nvidia never attempted another GPU with a slow VRAM segment, making sure each card was advertised with the correct amount of memory. But it found another specification that was easier to play around with: CUDA core count.

Before the 10 series, it was common to see GPUs in multiple (usually two) versions that differed only in VRAM capacity, like the GTX 960 2GB and GTX 960 4GB. The version with more VRAM was just that; in most cases it didn't even have extra memory bandwidth. That all changed with Nvidia's 10 series, which introduced GPUs like the GTX 1060 3GB. On the surface, it sounds like a GTX 1060 with half the usual 6GB, but there's a catch: it also had fewer cores.

As a product in its own right, the GTX 1060 3GB was passable, according to reviewers like Techspot and Guru3D, who didn't even mind the reduced core count. But the 1060 3GB ushered in a flood of GPU variants with both less VRAM and fewer cores, and frankly, that trend has only created confusion. Core count is arguably what differentiates GPU models, with VRAM being just a secondary performance factor.
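
The silicon itself can't lie, at least: the CUDA runtime reports a card's physical configuration, so a buyer can verify what they actually received. A minimal sketch, assuming an installed CUDA toolkit; note that cores per SM depend on the architecture (128 on Pascal, for instance), so total core count is derived rather than reported directly:

```c
// Hedged sketch: query a card's real configuration via the CUDA runtime.
// Build with: nvcc query.cu -o query
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {  // GPU 0 assumed
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }
    printf("name: %s\n", prop.name);
    printf("SMs:  %d\n", prop.multiProcessorCount);   // 10 on a 1060 6GB, 9 on the 3GB
    printf("VRAM: %.1f GB\n", prop.totalGlobalMem / 1e9);
    // On Pascal, CUDA cores = SMs x 128: 1280 for the 1060 6GB vs. 1152 for the 3GB.
    return 0;
}
```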

The worst example of this bait and switch would have been the RTX 4080 12GB, which was set to ship with just 78% of the cores of the RTX 4080 16GB, making it feel more like an RTX 4070 than anything else. The backlash was so fierce, however, that Nvidia actually canceled the RTX 4080 12GB, which (un)fortunately means it will never appear on this list.

GeForce RTX 2080

One step forward, two steps back

RTX 2080
Riley Young/Digital Trends

With the GTX 10 series, Nvidia achieved total dominance of the GPU market; cards like the GTX 1080 Ti and the GTX 1080 are easily some of Nvidia's best GPUs of all time. Nor did Nvidia slow down: its next-gen RTX 20 series introduced real-time ray tracing and AI-assisted resolution upscaling. The 20 series was far more technologically advanced than the 10 series, which was basically the 900 series on a better node.

In fact, Nvidia rated its new tech so highly that it gave the RTX 20 series the prices it felt the cards deserved, with the RTX 2080 at $800 and the RTX 2080 Ti at $1,200. Ray tracing and DLSS were the next big thing, so that would make up for it, Nvidia reckoned. Except the value wasn't obvious to anyone, because not a single game supported ray tracing or DLSS on launch day, and support remained months away. It wasn't until RTX 30 cards came out that there were many games supporting these new features.

The RTX 2080 was a particularly poor 20-series GPU. It was roughly $100 more expensive than the GTX 1080 Ti yet slightly slower in our testing; at least the 2080 Ti could claim to be around 25% faster than the old flagship. And even once ray tracing and DLSS came into play, ray tracing was so demanding that the 2080 struggled to hit 60 fps in most titles that used it, while DLSS 1.0 simply didn't look very good. By the time DLSS 2 arrived in early 2020, RTX 30 was already on the horizon.

Nvidia had overreached, and it knew it. Just eight months after launching the 20 series, Nvidia released its RTX 20 Super GPUs, a throwback to the GTX 500 series and how it patched up the 400 series. The new Super variants of the 2060, 2070, and 2080 had more cores, better memory, and lower prices, somewhat addressing the problems of the original 20 series.

GeForce RTX 3080 12GB

How to make a good GPU terrible

RTX 3080 graphics card on a pink background.
Jacob Roach / Digital Trends

So we've seen what happens when Nvidia takes a good GPU and cuts the VRAM and core count without changing the name, but what happens when it takes a good GPU and adds more VRAM and cores? Making a good GPU even faster sounds like a great idea! Well, in the case of the RTX 3080 12GB, it resulted in arguably Nvidia's most pointless GPU ever.

Compared to the original RTX 3080 10GB, the 3080 12GB wasn't actually much of an upgrade. Like other Nvidia GPUs with more memory, it also had more cores, but only about 3% more. In our testing, the 10GB and 12GB models performed almost identically, quite unlike the 1060 3GB, which was significantly slower than the 1060 6GB. To Nvidia's credit, the 3080 12GB's name was at least accurate, a noticeable improvement over the 1060 3GB situation.

So what's the problem with offering a new version of a GPU with more memory? Well, Nvidia released the 3080 12GB during the GPU shortage of 2020-2022, and naturally it sold at an absurdly high price of between $1,250 and $1,600. Meanwhile, 10GB variants were going for $300 to $400 less, and since the memory upgrade clearly didn't matter, it was obvious which card to buy.

Perhaps the most embarrassing thing for the 3080 12GB wasn't the cheaper 10GB version but the existence of the RTX 3080 Ti, which had the same memory capacity and bandwidth as the 3080 12GB. The thing is, it also had 14% more cores and therefore significantly higher performance. By the time we tested the 3080 12GB, the 3080 Ti was cheaper, making the 12GB model pointless from literally every angle and just another card released during the shortage that made no sense at all.

Nvidia's worst GPUs, so far

To Nvidia's credit, even most of its worst GPUs had something to offer: the 970 was good despite its memory, the 1060 3GB merely had a bad name, and the RTX 2080 was only about $200 overpriced. On a purely technological level, Nvidia has made very few mistakes, and even the GTX 480 was at least the fastest single-GPU graphics card of its day.

That being said, good tech can't make up for bad business decisions like misleading naming conventions and exorbitant pricing, and those are mistakes Nvidia keeps repeating. Neither seems likely to go away any time soon: the RTX 4080 12GB very nearly made it to market, and the RTX 4080 and RTX 4090, excellent cards though they are, are simply too expensive to make sense.

It isn't hard to predict that Nvidia's GPUs will only get more expensive, and I expect that trend to continue. Nvidia's next worst GPU may not be let down by shady marketing, misleading branding, or technological flaws, but by price alone. We'll be lucky if the RTX 4070 doesn't cost more than AMD's upcoming RX 7900 XTX.
