Using the Old Standard or a Shiny New Rival

How did we get here?

I guess the first thing I should do is explain a bit further. Specifically, we are talking about the RTX 3050 6GB version mentioned in a blog here, and the GTX 1660. Typically, a good test would be the current card versus the next model up from the previous generation. That would mean testing the RTX 3050 against the RTX 2060, but there are a couple of issues with that.

The first issue is that the new card is a cut-down version of the standard RTX 3050, with roughly 75 percent of its memory, graphics processing, and speed. It probably deserves an entirely different name, but NVidia enjoys confusing the consumer. It’s not the first time they’ve created a different product with the same name. Nomenclature doesn’t count as one of the issues, though. For this example, we will only be talking about the current 6GB version.

The second issue is that we aren’t actually comparing cards with like characteristics. The GTX 1660 can’t ray trace and can’t use NVidia’s DLSS upscaling. We won’t be looking at ray tracing here, and AMD has provided an answer on the upscaling front with FidelityFX, which runs on both cards. Still, both of these cards do have 6GB of video memory.

There is one very important difference that may give the newer card an edge.

More Power

The GTX 1660 requires external power. Modern motherboards are designed to offer 75 watts of power through the PCI Express slot. Most modern video cards use that and still need more, provided by a dedicated power cable. That also means the power supply typically needs to be a bit stouter. That last fact tends to keep some older PCs from being good candidates without extra work.

Finding older OptiPlexes or ThinkStations is pretty easy these days, with old office PCs selling for pennies on the original dollar. Working PCs can often be found for less than fifty US dollars. They are often good candidates for some easy upgrades, but a few are more difficult. Things like power supplies and front panel connectors are often proprietary, making upgrading the graphics card or changing the case more difficult.

This is also the case for the GTX 1660. It’s not that it uses a lot of power; it actually draws about 20% less than similar cards from AMD, the RX 480 and 580, but it does need external power. One option is using adapter cables, but those can introduce heat and other issues, not to mention a fire hazard. Still, it’s an option, and can be a good one if done properly. The 1660 and its ‘Super’ and ‘Ti’ versions are great options, even five-plus years after their release.

The RTX 3050 has no such limitation because of its seventy-watt power draw. This, and its size, could make it ideal for giving some of these older office machines a new life as a gaming PC. It still won’t fit in the single low-profile slot that the RX 6400 will, but that’s a different comparison.
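To put numbers on the slot-power point, here is a minimal sketch of the check you’d mentally run before dropping a card into one of these office machines. The 75-watt figure comes from the PCIe spec; the per-card board power numbers are approximate published figures, used here as assumptions.

```python
# Rough sketch: can a GPU run off PCIe slot power alone?
# Board-power figures are approximate published TDPs, not measurements.

PCIE_SLOT_WATTS = 75  # what the PCIe x16 slot is specified to deliver

CARDS = {
    "GTX 1660": 120,     # needs a dedicated power cable
    "RTX 3050 6GB": 70,  # slot power only
    "RX 6400": 53,       # slot power only, low profile
}

for name, board_power in CARDS.items():
    if board_power <= PCIE_SLOT_WATTS:
        print(f"{name}: ~{board_power}W, runs off the slot alone")
    else:
        extra = board_power - PCIE_SLOT_WATTS
        print(f"{name}: ~{board_power}W, needs ~{extra}W+ from a cable")
```

It’s a trivial comparison, but it’s exactly the one that decides whether an old OptiPlex needs a PSU swap or adapters before it can take the card.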

So, which is better?

As it turns out, both of these cards are very well matched, with the GTX 1660 performing slightly better on older titles and e-sports, and the 3050 doing slightly better on newer titles. The full video of this comparison with benchmarks can be found here, but there is one very important thing to discuss: the price.

Used GTX 1660s and their sibling models range from 90 to 110 US dollars, while the RTX 3050 6GB costs another 40-60 bucks. You can save that on the rest of the hardware, which won’t need a bigger PSU or adapters, but what that means to you may come down to what is available when you try to buy parts. The GTX 1660 is more than a worthy opponent, but the RTX 3050 definitely deserves part of the conversation.
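Since the real trade-off is card price versus the rest of the build, here is a quick back-of-the-envelope sketch of how the totals can converge. The card prices come from the ranges above; the PSU and adapter costs are placeholder assumptions, not quotes.

```python
# Back-of-the-envelope build cost comparison (all figures USD).
# PSU/adapter costs are placeholder assumptions; they vary a lot.

def total_cost(card, psu_upgrade=0, adapters=0):
    return card + psu_upgrade + adapters

# Used GTX 1660 (~$90-110) may need a stouter PSU or adapters.
gtx_1660_build = total_cost(card=100, psu_upgrade=40, adapters=10)

# RTX 3050 6GB (~$130-170) runs off slot power; the stock PSU is fine.
rtx_3050_build = total_cost(card=150)

print(f"GTX 1660 build: ~${gtx_1660_build}")
print(f"RTX 3050 build: ~${rtx_3050_build}")
```

With those assumed side costs, the gap closes almost completely, which is why availability on the day you buy matters more than the sticker prices alone.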

The one thing left to talk about is why this card takes on a name that already exists. There seems to be no reason NVidia does this except to confuse the consumer, and they do it often. They tend not to differentiate between laptop and desktop models, and even cards with different memory or die configurations share names. Some more recent, blatant examples are the GT 1030, available with both DDR4 and GDDR5 memory and almost no markings to tell them apart; the GTX 1060 in 3GB and 6GB models; and the RTX 3060 in 8 and 12GB versions.

It’s certainly confusing, and I, like others, have no idea why they would do it. I will tell you what isn’t confusing, though: the 3050 6GB is a decent card that has a valid use case. I may take a lot of flak for writing that, but except for the price, it’s as good or better than many of the other options available, especially options from Intel and AMD.

The GPU is the Bottleneck? Again? Really?

How did we get here?

Recently, I tested the newly acquired E3-1270 v3 Xeon in the Dell OptiPlex as a possible replacement. I found it wasn’t an upgrade because of the GPU. Fair enough. I knew the RX 6400 wasn’t a great card, but it’s what fit, and it worked.

I then chose to get some parts and put the Xeon in a different case on a Dell motherboard. Perfect. So far, so good. The original plan was to compare the new benchmark numbers to the old ones to show how much more room we had to grow. As it turns out, it was quite a bit, and with a better graphics card, it now turned in figures like much newer hardware.

Well, that didn’t turn out the way I wanted. I expected the setup, testing 1440P and 1080P on high, medium, and low quality, to show the Xeon struggling. It didn’t. In fact, it was for the most part within margin of error of the newer CPUs. That wasn’t going to make for an informative video at all.

The Testing

So, I ditched the idea of the comparison with the older i5 and concentrated on testing against the 10105F and 11400F. Not brand new, but recent enough to compare. The 10105F is a four-core, eight-thread i3 from 10th Gen Intel, and the 11400F is a six-core, twelve-thread i5 from 11th Gen. Both of these should be more than a match, so I tested with the mid-range RX 480 GPU, which came out around the same time as this Xeon. Obviously, the newer CPUs have an advantage, right? Well, no.

As I mentioned, the Xeon stayed within margin of error using the mid-range GPU, so what could I do? I dropped the resolution. Instead of 1440P and 1080P, I dropped the settings down to test 1080P Low, 900P Low, and 720P Low. These are resolutions the 11th gen chip has no business thinking about, because it would normally run with a better GPU.
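For what it’s worth, “within margin of error” here just means the averaged FPS numbers sit closer together than the run-to-run variance. Here is a minimal sketch of that check; the 3% threshold and the FPS figures are illustrative assumptions, not my actual benchmark data.

```python
# Sketch: treat two benchmark averages as "within margin of error"
# when they differ by less than typical run-to-run variance.
# The threshold and FPS numbers below are illustrative assumptions.

MARGIN = 0.03  # assume ~3% run-to-run variance

def within_margin(fps_a, fps_b, margin=MARGIN):
    return abs(fps_a - fps_b) / max(fps_a, fps_b) <= margin

xeon_avg = 142.0  # hypothetical Xeon average at 1080P Low
i5_avg = 145.5    # hypothetical 11400F average

if within_margin(xeon_avg, i5_avg):
    print("Within margin of error: effectively a tie")
else:
    print("Outside margin of error: a real difference")
```

Dropping the resolution shrinks the GPU’s share of the frame time, so any real CPU difference should finally poke out past that margin.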

Very much to my surprise, the 11400F actually lost to BOTH four-core chips in some games. What? Seriously? That can’t be right.

But it was, in multiple games across repeatable tests. It traded blows with the 10th gen CPU in some games, but in others it actually performed the worst. Now, understandably, the six-core CPU has no business running games at 720P, and it shows, but that doesn’t solve my original problem. How do I get a fair test between the three processors?

Need a different GPU

Quite simply, I need to be able to test all three of these at a higher resolution, so I’m going to have to use a better graphics card. The RX 6600XT is the easiest one for me to access and test with, so it will be next. To that point, seeing how close the two four-core chips are, I will probably just compare the Xeon with the 11400F. I have records showing a fair difference between the 10th gen and 11th gen using the 6600XT, so if I have to, I’ll include all three, but that should give us a better idea of how good this Xeon actually is.

I’m curious just how well the older E3-1270 holds up to a far more recent product. This should be fun. The video will be linked here.
