Arc B570: Better than it needs to be

How Did We Get Here?

My adventures with Intel Arc video cards began shortly after their release. I bought an Arc A750 in the spring of 2023 and immediately found issues. The price was fair at around 250 USD, but performance on many titles was insufferable. Some games couldn’t use the DirectX 11 API at all, and although Vulkan worked on many titles, some games that offered it wouldn’t let you choose it. World War Z, for example, had horrible performance and broken drivers, and its Vulkan option couldn’t be selected. Some of my thoughts on it are here.

Performance on titles that had Vulkan as the default was okay, and many DirectX 12 titles worked, but the price was too high for what the card delivered. The only hope was that Intel would do what it promised and fix the issues; otherwise, this card was one step above a paperweight. Then came the drivers. An update here, another a few weeks later, and then a major one. It didn’t stop with just one or two, either. Updates continued to arrive seemingly every week. A major release at around the one-year mark cleaned up most DX11 issues, including some of those on WWZ mentioned earlier. This card was now decent. A follow-up video is here.

The one thing the Alchemist cards had going for them was the adoption of the AV1 video encoder. Better still, platforms like YouTube accepted AV1, and the encoder on the Arc A750 was outstanding. In fact, the Intel AV1 encoder on all of its cards performed well, even the lower-tier A380. AMD and Nvidia were both behind here. With smaller file sizes, little quality loss in compression, and fast rendering, small creators had a gem.
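
For anyone curious what that workflow looks like in practice, here is a minimal sketch of a hardware AV1 encode on an Arc card, driven from Python. It assumes an FFmpeg build with Intel Quick Sync (QSV) support is on the PATH; the file names and bitrate are placeholders, not recommendations.

```python
# Minimal sketch: re-encode a recording to AV1 on an Intel Arc GPU using
# FFmpeg's Quick Sync encoder (av1_qsv). Assumes an FFmpeg build with QSV
# support is installed; file names and bitrate below are placeholders.
import subprocess

def encode_av1_qsv(src: str, dst: str, bitrate: str = "8M") -> None:
    """Hardware-encode `src` to AV1 with Intel's av1_qsv encoder."""
    cmd = [
        "ffmpeg",
        "-y",                 # overwrite output if it exists
        "-i", src,            # input recording
        "-c:v", "av1_qsv",    # Intel hardware AV1 encoder (Arc GPUs)
        "-b:v", bitrate,      # target video bitrate
        "-c:a", "copy",       # pass audio through untouched
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    encode_av1_qsv("gameplay_master.mkv", "gameplay_av1.mkv")
```

The appeal for uploads is exactly what's described above: the AV1 output generally lands at a noticeably smaller file size than H.264 at comparable quality, and the hardware encoder keeps render times short.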

But What About the Arc B570?

Shortly after the two-year mark for Alchemist came Battlemage. The B580 released for desktops, and the first thought was driver performance. The first thought should have been whether these cards would be available at all. No one seemed to have them except reviewers, who were actually positive. It was a stark contrast to the previous launch, and a great sign for consumers. Two months later, and the B580 is still not available. Okay, technically it is, if you want to pay a one hundred percent markup.

One thing that did become available, at least occasionally, was the B570. Just as the A750 was the A770’s little brother, the B570 is the B580’s. As with the A750, the performance may not match the more expensive card, but it is still good. In this case, good enough to beat the RTX 3060 in many benchmark tests. It was also slightly better in the video encoding mentioned earlier.

It performs well at both 1080p and 1440p, and the model I picked up runs extremely quietly. Temperatures were also outstanding, with the two-fan Sparkle card never going above the mid-60s Celsius. The RTX 3060 I tested it against has 12GB of video memory, where the B570 has ten, but the only game in which the Nvidia card beat it soundly used less than half the available memory. I used the RTX 3060 because it’s the most popular card on Steam, so it’s a realistic comparison.

So, what now?

The Sparkle card is actually very attractive as well. It has a few curves and a nice blue color, with subtle accent lighting and a pleasing fan design. I loved the original reference design from Intel and was hesitant to buy this one, but the reference design looks to be available only on the higher model, which is harder to find than Nvidia’s new 50 series. The A380 I have is also a Sparkle-brand card, but the design of the B570 actually impressed me.

This card will soon go into an upgraded editing rig. The color scheme for that machine is blue and white, so the new motherboard and this card will match well. The AV1 encoder is a definite draw, and now I know the gaming performance is as well. I’m sure I’ll write more about this card, and probably compare it to any new card I get from the other two companies, so stay tuned. In the meantime, the video with benchmarks can be found here.

Using the Old Standard or a Shiny New Rival

How did we get here?

I guess the first thing I should do is explain a bit further. Specifically, we are talking about the RTX 3050 6GB version mentioned in a blog here, and the GTX 1660. Typically, a good test would be the current card versus the next model up from the previous generation. That would mean testing the RTX 3050 against the RTX 2060, but there are issues with that.

The first issue is that the new card is a cut-down version of the standard RTX 3050, with roughly 75 percent of the memory, graphics processing, and speed. It probably deserves an entirely different name, but Nvidia enjoys confusing the consumer. It’s not the first time they’ve created a different product with the same name. Nomenclature doesn’t count as one of the issues, though. For this example, we will only be talking about the current 6GB version.

The second issue is that we aren’t actually comparing cards with like characteristics. The GTX 1660 lacks the ability to ray trace or to use Nvidia’s DLSS upscaling. We won’t be looking at ray tracing, and AMD has provided an answer for the upscaling comparison with FidelityFX, which works on both cards. Still, both of these cards do have 6GB of video memory.

There is one, very important difference that may give the newer card an edge.

More Power

The GTX 1660 requires external power. Modern motherboards are designed to offer 75 watts of power through the PCI Express slot. Most modern video cards use that and also need additional power from a dedicated cable. That also means the power supply typically needs to be a bit stouter. That last fact tends to keep some older PCs from being good upgrade candidates without extra work.

Finding older OptiPlexes or ThinkStations is pretty easy these days, with older office PCs selling for pennies on the original dollar. Working PCs can often be found for less than fifty US dollars. They are often good candidates for some easy upgrades, but a few are more difficult. Things like power supplies and front-panel connectors are often proprietary, making upgrading the graphics card or changing the case harder.

This is also the case for the GTX 1660. It’s not that it uses a lot of power (actually about 20% less than similar cards from AMD, the RX 480 and 580), but it does need external power. One option is using adapter cables, but that can introduce heat and other issues, not to mention a fire hazard. Still, it’s an option and can be a good one if done properly. The 1660 and its ‘Super’ and ‘Ti’ versions are great options, even five-plus years after their release.

The RTX 3050 6GB has no such limitation, because its seventy-watt power draw fits within the slot’s 75-watt budget. This, and its size, could make it ideal for giving some of these older office machines a new life as gaming PCs. It still won’t fit in the single low-profile slot that the RX 6400 will, but that’s a different comparison.

So, which is better?

As it turns out, these two cards are very well matched, with the GTX 1660 performing slightly better on older titles and esports, and the 3050 doing slightly better on newer titles. The full video of this comparison with benchmarks can be found here, but there is one very important thing to discuss: the price.

Used GTX 1660s and their sibling models range from 90 to 110 US dollars, while the RTX 3050 6GB costs another 40-60 bucks. You can save that on the rest of the hardware, which won’t need a bigger PSU or adapters, but what that means to you may come down to what is available when you try to buy parts. The GTX 1660 is more than a worthy opponent, but the RTX 3050 definitely deserves part of the conversation.

The one thing left to talk about is why this card takes on a name that already exists. There seems to be no reason Nvidia does this except to confuse the consumer, and they do it often. They tend not to differentiate between laptop and desktop models, and they even let cards with different memory or die configurations share names. Some more recent, blatant examples are the GT 1030, available with either DDR4 or GDDR5 memory and almost no markings to tell them apart; the GTX 1060 3GB and 6GB models; and the RTX 3060, available in 8GB and 12GB versions.
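
If you’re ever unsure which variant is actually sitting in a machine, the memory size usually gives it away. Here is a small sketch that queries it through Nvidia’s nvidia-smi utility; it assumes the Nvidia driver and nvidia-smi are installed and on the PATH.

```python
# Sketch: tell same-name Nvidia variants apart by VRAM size, since the card
# name alone often won't. Assumes Nvidia drivers and nvidia-smi are installed.
import subprocess

def gpu_name_and_memory() -> list[tuple[str, ...]]:
    """Return (name, total memory) for each Nvidia GPU via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [tuple(part.strip() for part in line.split(","))
            for line in out.strip().splitlines()]

if __name__ == "__main__":
    for name, mem in gpu_name_and_memory():
        # e.g. "NVIDIA GeForce RTX 3060: 12288 MiB" vs. the 8GB variant
        print(f"{name}: {mem}")
```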

It’s certainly confusing, and I, like others, have no idea why they would do it. I will tell you what isn’t confusing, though: the 3050 6GB is a decent card with a valid use case. I may take a lot of flak for writing that, but except for the price, it’s as good as or better than many of the other options available, especially those from Intel and AMD.