Can a sub-$100 graphics card get the job done?

Scott Wasson

We have not, I must admit, been great fans of sub-$100 graphics cards here at TR. Yes, we’ve reviewed them pretty faithfully over the years, but as I said in our review of the Radeon HD 2400 series: “If you’re spending less than [$100] on graphics, you’re getting the sort of performance you deserve, regardless of which brand of GPU you pick.” In other words, cheaping out will get you lousy frame rates and spotty gameplay. It wasn’t just spite to say so; it was definitely true.

But what if you could cheap out and get more than you deserve for it? What if, through the magic of technological progress, dropping 80 bucks on a video card could get you a GPU that will slice through the latest games with relative ease? What if it could help decode HD video streams perfectly, even on a slow CPU? If such a beast existed, should you consider spending more, or would it just be future-proofing and fluff?

Exhibit A in our quest for knowledge is the brand-spanking-new Radeon HD 4670 graphics card, which threatens to turn our assessment of the market on its ear. The 4670 inherits its GPU DNA from the Radeon HD 4800 series, which crams a tremendous amount of graphics power into a relatively small chip. AMD has scaled down this same basic design to fit into a budget-class GPU, and in doing so, it has brought unprecedented levels of graphics power to cheapskates everywhere. To put the 4670’s graphics power into perspective, this $79 card has twice the shader power and three times the texturing capacity of the most capable game console, the Xbox 360 (assuming the spotty info on game consoles I found out there is correct). If you’re more of a PC-oriented person, consider this: the 4670 has roughly equivalent shader power and over twice the texturing capacity of the Radeon HD 2900 XT, the first DirectX 10-class Radeon, a high-end card that debuted at under $400 about 18 months ago. And the 4670’s architecture is arguably more efficient.

There are mitigating factors here, of course. The biggest one is the 4670’s relatively anemic memory bandwidth, which is under a third of the 2900 XT’s. But the trends are favorable for cheapskates, for a variety of reasons. Better compression, smarter caching, and the proliferation of programmable shaders may mean memory bandwidth is at less of a premium, for instance. Not only that, but AMD’s competition over at Nvidia has responded to the 4670 by adding another cheap video card to its portfolio. The affordable entries in these two firms’ lineups stretch from about 60 bucks to 170 bucks, with multiple increments in between. Even if the dirtiest of dirt-cheap video cards won’t cut the mustard, surely something in there will. The question is: how little can you get away with spending? Let’s take a look.

The Radeon HD 4670 steps up
The Radeon HD 4670 doesn’t look like much at first glance. In fact, it looks like pretty much any other low-end graphics card.

However, fitting that profile isn’t a bad thing at all, really. A card like this one will easily go into just about any PC, maybe even that cheapo HP you picked up at Costco without realizing its built-in graphics sucked harder than a Dyson. The board itself is just over 6.5″ long, and it’s content to draw power solely from the PCI Express slot—no auxiliary power connection needed. AMD rates the 4670’s peak power draw rather vaguely at “under 75W,” the max a PCIe slot can supply, but still not terribly much. Even most cheap power supplies should be able to keep this puppy fed.

Lurking beneath that modest cooler is an RV730 GPU. If you’ll permit me to geek out a little bit, I’ll give you its specs. Like its big brother RV770, the RV730 is a DirectX 10.1-capable graphics processor with a unified shader architecture and a full suite of modern features. The RV730 chip is quite a bit smaller than its older sibling, though. Manufactured by TSMC on a 55nm process node, the RV730 has an estimated 514 million transistors stuffed into an area of 145 mm². In the RV730, AMD has cut down the RV770 design by halving the number of shader execution units per SIMD partition from 16 to eight and by reducing the number of SIMD partitions from 10 to eight. What’s left are 64 superscalar execution units, each of which has five ALUs. Multiply that out, and you have 320 ALUs or stream processors (SPs), as AMD likes to call them.
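If you want to check my arithmetic, the math is simple enough. Here’s a back-of-the-envelope sketch in Python; the figures all come from the paragraph above:

```python
# RV730 stream processor count, per the figures above:
# 8 SIMD partitions, each with 8 superscalar units of 5 ALUs apiece.
simd_partitions = 8
units_per_simd = 8
alus_per_unit = 5

stream_processors = simd_partitions * units_per_simd * alus_per_unit
print(stream_processors)  # 320
```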

As I’ve said, that’s quite a bit of shader power, with just as many SPs as the Radeon HD 2900 XT, though the RV730’s SPs should be more efficient and have a few new capabilities, including DirectX 10.1 support. AMD has made one concession to the RV730’s budget status by removing native hardware support for double-precision floating-point math, a feature really only used by non-graphics applications tapping into AMD’s stream computing initiative. The rest of the compute and data sharing provisions built into the RV770 remain intact in the RV730, though.

The outlook gets even rosier for the Radeon HD 4670 when we consider texturing capacity, a weakness for prior Radeons but a strength here. Because this architecture aligns texture units with SIMD partitions, the RV730 has eight texture units, each of which is capable of sampling and filtering four texels per clock. That’s 32 texels per clock from a low-end GPU, not far at all from the 40 texels/clock of the Radeon HD 4850 and 4870.

The RV730 has only two render back-ends, each of which can write four pixels per clock to the frame buffer. Yet those render back-ends are quite a bit more capable than the ones in the Radeon HD 2000 and 3000 series, with twice the throughput for multisampled antialiasing, 64-bit color formats, and depth/stencil only rendering. In practical terms, the RV730 should be even more capable, relatively speaking, since the render back-ends in those older Radeon HD 2000- and 3000-series GPUs had to rely on shaders to help with some of the antialiasing work. The RV730 does not. The two render back-ends each sit next to a 64-bit memory controller, giving the RV730 an aggregate 128-bit path to memory. That’s half what you’ll get in a $149 video card, but twice what you might expect from a $79 one.

Wow, so I really geeked out there. Sorry about that.

Back on planet Earth, the Radeon HD 4670 will come in two versions. Both will have a 750MHz GPU and shader core, but they’ll differ in memory size and speed. The first version will have 512MB of GDDR3 memory clocked at 1GHz, for an effective 2GT/s. This is the version we have in Damage Labs for testing, and it’s probably the more sensible of the two. The second will have a full gigabyte of GDDR3 memory at a lower 900MHz clock and 1.8GT/s data rate. Either one should set you back just a penny shy of 80 bucks, according to AMD, and indeed, there’s a 512MB MSI card selling for exactly that price at Newegg right now.

(I guess, technically, I should say it’s a “512 MiB” card, but I’d rather claw my eye out with a fork.)
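Incidentally, those clocks are all you need to reproduce the 4670’s theoretical peaks, which we’ll tabulate alongside the competition later. Here’s a rough sketch of the math, assuming the usual conventions: a multiply-add counts as two flops per ALU per clock, and memory bandwidth is bus width times data rate.

```python
# Radeon HD 4670 (512MB version) theoretical peaks, from the specs above.
core_clock_ghz = 0.75        # 750MHz GPU and shader core
mem_rate_gts = 2.0           # 1GHz GDDR3, effectively 2GT/s
bus_width_bits = 128         # two 64-bit memory controllers

stream_processors = 320      # see the RV730 breakdown earlier
texels_per_clock = 32        # 8 texture units x 4 texels each
pixels_per_clock = 8         # 2 render back-ends x 4 pixels each

print(stream_processors * 2 * core_clock_ghz)  # 480.0 GFLOPS (MAD = 2 flops)
print(texels_per_clock * core_clock_ghz)       # 24.0 Gtexels/s
print(pixels_per_clock * core_clock_ghz)       # 6.0 Gpixels/s
print(bus_width_bits / 8 * mem_rate_gts)       # 32.0 GB/s
```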

The cards come with a couple of dual-link DVI outputs and a TV-out port. Our sample came with a dongle to convert the TV-out port to component video and another to convert a DVI port to HDMI.

The other Radeons in the race
If $79.99 is too rich for your blood, you may be interested in another RV730-derived graphics card, the Radeon HD 4650. We had hoped to include one in this comparison and tried to make arrangements to receive one for testing, but unfortunately, it hasn’t made it here yet. Not that it likely matters much. The 4650 is much less powerful than the 4670 due to lower core (600MHz) and memory (500MHz GDDR2) clock speeds. It will save you ten bucks, though, since cards are available for $69.99. The 4650’s main appeal to PC enthusiasts may be for use in home theater PCs, since it draws under 60W max and some versions may offer support for DisplayPort audio, a new feature in the RV730. Of course, Radeon HD cards have long supported audio over HDMI, as well.

Say you’re willing to really open up the piggy bank and move upmarket a little bit in order to get more performance than the 4600 series can offer. What’s next? Well, there are quite a few slightly older video cards hanging around in the market that once cost nearly 200 bucks but now hover not far from $100. These cards are last year’s stars, but today they’re selling at a generous discount.

A perfect example of such a beast is this Radeon HD 3850 from Diamond. Although the 3850 started out as a cheaper alternative to the Radeon HD 3870, the two products have essentially merged over time. 3850s added larger, dual-slot coolers, went to 512MB of GDDR3 memory, and gained 3870-like clock speeds. Meanwhile, many 3870s have shed their expensive GDDR4 memory and reverted to cheaper GDDR3, which is faster clock for clock. This Diamond card is a pretty good example of the breed overall, with a 725MHz GPU clock and 512MB of GDDR3 memory clocked at 900MHz. It’s selling for $119.99 at Newegg with a $20 mail-in rebate, taking the net price down to either $99.99 or $119.99 plus 20 bucks’ worth of shattered dreams, depending on the whims of the rebate gods.

The 3850 may be a little bit older, but the premium you pay for this card over the 4670 will net you a higher-end product with a 256-bit memory bus. In fact, the 3850 has nearly twice the pixel fill rate and memory bandwidth of the 4670. Unlike the 4670, you will need an auxiliary PCIe power lead for this card, but its requirements are still quite modest.

We could dilly-dally around with 3870 cards that feature slightly higher GPU core and memory clocks for a little bit more money, but it’s difficult to see the point when you have AMD’s own Radeon HD 4850 hanging out there for not much more.

Take, for example, this Asus Radeon HD 4850. Not only is it a cornucopia of graphical goodness, with roughly twice the memory bandwidth and shader power of the Radeon HD 4670, but it has an historically implausible archer chick surrounded by rose petals on it. Just looking at it, I feel calm and secure, though slightly aroused. Those feelings are only heightened by its $169.99 list price and $30 mail-in rebate. If the rebate works out, we’re talking 140 bucks net for an incredibly powerful graphics card, calling into question the whole rationale for this article because, well, why buy anything else? I dunno, but perhaps Nvidia has an answer.

The GeForce side of the equation
Truth be told, Nvidia has many answers for the low-end video card market—perhaps too many. Those answers start at under $60 with the GeForce 9500 GT, most directly a competitor for the Radeon HD 4650. Our example of the 9500 GT has one big thing going for it.

Yep, that’s a passive cooler. Zotac’s ZONE edition of the 9500 GT comes completely fanless, with an HDMI adapter and no need for an auxiliary power connection. It also has 512MB of relatively quick GDDR3 memory running at 800MHz (or 1600MT/s). This is a new product we’ve not yet found for sale online, but Zotac does expect something of a premium for its passivity: the suggested retail price is $79.99. I can see paying that for something this well-suited to an HTPC build. Happily for the penny pinchers among us, though, you can grab a fan-cooled Zotac 9500 GT with the same 550MHz core clock and slower GDDR2 memory for $69.99 at Newegg, along with a $15 mail-in rebate, taking the pretend price down to $54.99. That’s pretend cheap!

The GPU that powers the GeForce 9500 GT is known as G96, and you can see it pictured above. The G96 quietly debuted along with the 9500 GT a little while back, but this is the first time it’s made it into Damage Labs. The G96 has the same basic capabilities and functional blocks as the G84 chip that first saw duty in the GeForce 8600 series of graphics cards. That includes two thread processing clusters for a total of 32 stream processors and 16 pixels per clock of texture filtering capacity, along with two ROP partitions (what AMD calls render back-ends) for a total of eight pixels per clock of output and a 128-bit path to memory.
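The same sort of back-of-the-envelope math we used for the 4670 reproduces the 9500 GT’s entries in the spec table later on. Here’s a quick sketch using the Zotac ZONE card’s 550MHz core and 1600MT/s memory; the shader clock isn’t quoted above, so I’ll skip the GFLOPS here:

```python
# GeForce 9500 GT (G96) theoretical peaks at the Zotac ZONE card's clocks.
core_clock_ghz = 0.55      # 550MHz core
mem_rate_gts = 1.6         # 800MHz GDDR3, or 1600MT/s
bus_width_bits = 128

print(16 * core_clock_ghz)                # 8.8 Gtexels/s (16 texels/clock)
print(8 * core_clock_ghz)                 # 4.4 Gpixels/s (8 pixels/clock)
print(bus_width_bits / 8 * mem_rate_gts)  # 25.6 GB/s
```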

The G96 is quite a bit smaller than the G84, though, thanks to a process shrink from 80nm to 65nm. In fact, with only about 314 million transistors, the G96 is smaller and less complex than the RV730. By my shaky measurements, this chip is roughly 121 mm². I believe we have the 65nm version of the G96 here, but Nvidia plans to transition the G96 to a 55nm fab process without renaming it, so some G96s may be even smaller.

As the G9x designation suggests, the G96 does have some improvements over its G8x predecessor, including support for PCIe 2.0, better shader instruction scheduling, improved color compression, DisplayPort support, and a couple of new PureVideo features—dynamic contrast enhancement and dual-stream video decoding for picture-in-picture commentary on Blu-ray discs.

Feature-wise, then, the G96 is pretty capable. But despite its similar size and 128-bit memory interface, the G96 is a much less potent GPU than the RV730 in terms of both shader power and texture filtering throughput. That is, perhaps, why Nvidia has chosen an old product with a new name to counter the Radeon HD 4670.

Here’s the GeForce 9600 GSO. This product was previously known as the GeForce 8800 GS, but it has been dusted off and renamed as part of the GeForce 9 series. Confusingly, the 9600 GSO isn’t based on the G94 GPU that GeForce 9600 GT cards are. Instead, it’s driven by the same G92 graphics processor that’s inside everything from the GeForce 8800 GT to the GeForce 9800 GTX, only here it’s been chopped down to six thread processing clusters and three ROP partitions. The net result is 96 stream processors, 48 texels per clock of filtering power, and a 192-bit memory interface. Strange, but true.

EVGA sells the card pictured above with a 555MHz core clock, 1350MHz shaders, and 384MB of GDDR3 memory clocked at 800MHz. Nvidia would like to position this product directly opposite the Radeon HD 4670, and in terms of basic capabilities, it’s quite close. But they’re playing pricing games to get there. You can buy the 9600 GSO at Newegg for $99.99, and it comes with a preposterous $50 rebate.
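In case you’re wondering where the numbers in our spec tables come from, the GSO’s are easy to reproduce from EVGA’s clocks. A sketch, assuming each Nvidia stream processor contributes a two-flop multiply-add per clock, plus a third flop from the co-issued MUL in the dual-issue case (more on that quirk shortly); the results land within rounding of the tables’ figures:

```python
# GeForce 9600 GSO (a cut-down G92) theoretical peaks at EVGA's clocks.
shader_clock_ghz = 1.35    # 1350MHz shaders
core_clock_ghz = 0.555     # 555MHz core
mem_rate_gts = 1.6         # 800MHz GDDR3, or 1600MT/s
stream_processors = 96

print(stream_processors * 2 * shader_clock_ghz)  # 259.2 GFLOPS, MAD only
print(stream_processors * 3 * shader_clock_ghz)  # 388.8 GFLOPS, MAD + MUL
print(48 * core_clock_ghz)                       # ~26.6 Gtexels/s
print(192 / 8 * mem_rate_gts)                    # 38.4 GB/s on the 192-bit bus
```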

Hmmmmm. That’s over half the value of the product. What do you think they’re expecting the redemption rate to be on that one?

Nowhere in our price search engine can you buy the 9600 GSO for less than $99.99 straight up, and many places are offering a rebate of only $20. All of this seems dicey to me, but if you’re willing to take the risk of getting fleeced by a rebate company, I suppose the 9600 GSO is potentially competitive with the 4670. It will, however, require case clearance for a 10″-long card and an auxiliary PCIe power connection.

If you’re going to make those sorts of accommodations, you might well do better to drop 125 bucks straight up for a GeForce 9600 GT. That’s what the BFG Tech 9600 GT OCX pictured above costs, and it comes complete with a swanky “ThermoIntelligence” custom cooler and much higher-than-stock clock speeds of 725MHz core, 1850MHz shaders, and 972MHz memory. Since it’s based on the smaller, narrower G94 GPU, the 9600 GT doesn’t have quite as much shader or texturing power as the 9600 GSO, but it has vastly more memory bandwidth.

If you’re not yet bewildered by your choices, then what the heck, gaze upon this GeForce 9800 GT card from Palit. The 9800 GT is simply a renamed version of the venerable GeForce 8800 GT, complete with a set of core and memory clocks comparable to a bone-stock 8800 GT. As the model numbers indicate, this card is a little upscale compared to the 9600 GT, both in terms of price and performance. The card you see pictured above from Palit has an exceedingly quiet, shrouded custom cooler and 1GB of GDDR3 memory. For these things, you’ll pay a little more; this card costs $169.99 at the ‘egg and has a $20 rebate attached. However, you can get virtually the same thing with 512MB of RAM and no shroud for $129.99, too, which would seem to make the 9600 GT we mentioned above pretty much superfluous.

The fine gradations of Nvidia’s product lineup don’t end there, either. The final card that fits within our basic price parameters is the GeForce 9800 GTX+. The “plus” is there to designate that this is the 55nm version of the G92 chip, and the GTX+ is intended to do battle with the formidable Radeon HD 4850. Again, rather than price the GTX+ straightforwardly against the 4850, Nvidia has elected to create a byzantine pricing structure that would make even a phone company jealous. You can find this card listed on Newegg at $199.99. When you click to put it into your shopping cart, though, the price shows up as $189.99. If you were to buy it, you’d then be entered into the great rebate lottery, in which winners are awarded $30 each. Potential bottom line: $159.99, which is less than the list price of that Asus Radeon HD 4850, but more than its after-rebate net cost.

Phew. So those are the cards we’re considering. We’ve thrown a lot of specs at you, but don’t be daunted. We’re going to offer some direct comparisons in terms of theoretical capacities and then test performance, so you can see exactly how these various choices compare.

Test notes
So look, we’ve done the typical review site thing and compared this range of graphics cards using a test rig based on a very high-end processor, the Intel Core 2 Extreme QX9650. Why would we do such a thing? Well, hear me out.

First, we wanted to be sure that we were pushing the graphics cards as hard as possible, so they would live up to their potential. Using a top-end CPU ensures that the processor doesn’t become the performance constraint. Second, in reality, the gap between our QX9650 and the actual processors you may find in many enthusiast systems isn’t as great as you might think. We’ve easily taken a $120 Core 2 Duo E7200 to over 3GHz on a basic air cooler. Yes, we’re using a quad-core CPU, but having more than two cores doesn’t tend to make much difference in gaming performance just yet. Third, our two GPU test rigs are outfitted with identical QX9650 processors, and honestly, without matching processors, our testing time would have been quite a bit longer.

One area where having a high-end CPU could skew our results is video playback testing, where we look at CPU utilization while playing a Blu-ray disc. For those tests, we swapped in just about the slowest Core 2 processor we could find, a Core 2 Duo E4300 clocked at 1.8GHz.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core 2 Extreme QX9650 3.0GHz
System bus: 1333MHz (333MHz quad-pumped)
Motherboard: Gigabyte GA-X38-DQ6
BIOS revision: F9a
North bridge: X38 MCH
South bridge: ICH9R
Chipset drivers: INF update 8.3.1.1009, Matrix Storage Manager 7.8
Memory size: 2GB (4 DIMMs)
Memory type: Corsair TWIN2X40966400C4DHX DDR2 SDRAM at 800MHz
Memory timings: CAS latency (CL) 4, RAS to CAS delay (tRCD) 4, RAS precharge (tRP) 4, cycle time (tRAS) 12, command rate 2T
Audio: Integrated ICH9R/ALC889A with Realtek 6.0.1.5618 drivers
Graphics:
- Radeon HD 4670 512MB GDDR3 PCIe with Catalyst 8.53-080805a-067874E-ATI drivers
- Diamond Radeon HD 3850 512MB PCIe with Catalyst 8.8 drivers
- Asus Radeon HD 4850 512MB PCIe with Catalyst 8.8 drivers
- Zotac GeForce 9500 GT ZONE 512MB GDDR3 PCIe with ForceWare 177.92 drivers
- EVGA GeForce 9600 GSO 384MB PCIe with ForceWare 177.92 drivers
- BFG GeForce 9600 GT OCX 512MB PCIe with ForceWare 177.92 drivers
- Palit GeForce 9800 GT 1GB PCIe with ForceWare 177.92 drivers
- GeForce 9800 GTX+ 512MB PCIe with ForceWare 177.92 drivers
Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x64 Edition
OS updates: Service Pack 1, DirectX March 2008 update

Thanks to Corsair for providing us with memory for our testing. Its quality, service, and support are easily superior to what you’d get with no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

The theory—and practice
You’ve already heard me ramble on at length about texture filtering and memory bandwidth, and you may be wondering why. Well, specs are pretty important in graphics cards, even to this day. No, they’re not destiny—a more efficient architecture might outperform a less efficient one, even if the latter had higher peak performance in theory. In fact, it happens all the time. But constraints like memory bandwidth do tend to dictate relative performance, especially among similar products from the same GPU maker. Below, I’ve compiled some key numbers for the cards we’re testing and a few of the higher-end ones we’re not, so you can get a sense of the landscape. Please note that these numbers are based on the actual clock speeds of the cards we’re testing, not the “stock” clocks established by the GPU makers for each GPU type.

                      Peak pixel     Peak bilinear      Peak bilinear      Peak memory
                      fill rate      texel filtering    FP16 texel         bandwidth
                      (Gpixels/s)    rate (Gtexels/s)   filtering rate     (GB/s)
                                                        (Gtexels/s)

GeForce 9500 GT          4.4             8.8               4.4                25.6
GeForce 9600 GSO         6.7            26.6              13.3                38.5
GeForce 9600 GT         11.6            23.2              11.6                62.2
GeForce 9800 GT          9.6            33.6              16.8                57.6
GeForce 9800 GTX+       11.8            47.2              23.6                70.4
GeForce 9800 GX2        19.2            76.8              38.4               128.0
GeForce GTX 260         16.1            36.9              18.4               111.9
GeForce GTX 280         19.3            48.2              24.1               141.7
Radeon HD 4650           4.8            19.2               9.6                16.0
Radeon HD 4670           6.0            24.0              12.0                32.0
Radeon HD 3850          11.6            11.6              11.6                57.6
Radeon HD 4850          10.0            25.0              12.5                63.6
Radeon HD 4870          12.0            30.0              15.0               115.2
Radeon HD 4870 X2       24.0            60.0              30.0               230.4

Notice the tremendous range we’re looking at between the cheapest video cards and the most expensive. The Radeon HD 4870 X2 has just shy of ten times the memory bandwidth of the GeForce 9500 GT—and we’re using the more expensive version of the 9500 GT with GDDR3. There is a real sense in which you get what you pay for when you buy a graphics card. The key specs do tend to track with price.

The Radeon HD 4670 is an important baseline for us because, at 80 bucks, it has more than twice the texture filtering capacity of the pricier Radeon HD 3850, and its newer architecture is almost assuredly more efficient, too. Heck, the 4670 has nearly as much filtering capacity as the 4850. The 3850 and 4850, though, both have quite a bit more memory bandwidth, about twice as much. I’m intrigued to see whether the 4670 can overcome that deficit. If it can, at least in part, it will signal something important about the viability of low-end graphics cards.

The 4670’s would-be competition from Nvidia, the rebate-driven GeForce 9600 GSO, boasts slightly higher capacities than the 4670 in each of the categories above. That will make for interesting times. Here’s what happens when we test these things with a directed benchmark.

3DMark’s color fill rate test measures the graphics card’s ability to draw pixels, essentially, and we’ve found that this test tends to be limited by memory bandwidth more than anything else. The cards are largely true to form here, with the exception that the Radeon HD 4850 edges ahead of the GeForce 9800 GTX+.

The texture fill test is arguably more important for performance in many games, and it’s typically less limited by memory bandwidth alone. In fact, the Radeon HD 4670 outperforms both the Radeon HD 3850 and the GeForce 9600 GT in this test, despite having less memory bandwidth. Still, the 4670 can’t keep pace with the 4850; the 4850 almost doubles the 4670’s texturing throughput, just as it has about double the memory bandwidth.

                      Peak shader arithmetic (GFLOPS)
                      Single-issue      Dual-issue

GeForce 9500 GT             90              134
GeForce 9600 GSO           259              389
GeForce 9600 GT            237              355
GeForce 9800 GT            339              508
GeForce 9800 GTX+          470              705
GeForce 9800 GX2           768             1152
GeForce GTX 260            477              715
GeForce GTX 280            622              933
Radeon HD 4650             384              n/a
Radeon HD 4670             480              n/a
Radeon HD 3850             464              n/a
Radeon HD 4850            1000              n/a
Radeon HD 4870            1200              n/a
Radeon HD 4870 X2         2400              n/a

(The Radeons’ shader processors have a single peak rate, so the dual-issue column applies only to the GeForces.)

And no, the units in the texture fill rate test don’t seem to track with our expectations at all—they seem to be off by miles. I’ve contacted the folks at FutureMark about this problem repeatedly, and they’ve told me repeatedly that the people who might address it are on vacation. This has been going on since, oh, June-ish, so I suggest you apply for your job at FutureMark today. The benefits are excellent.

The table above shows the next piece of the GPU performance picture, and increasingly the most important one: shader processing power. We’ve split these theoretical peak numbers into two columns in order to allow room for a quirk of Nvidia’s shader processors: they can issue an additional multiply instruction in certain cases, raising their theoretical peak shader arithmetic capacity by half. They can’t use this additional MUL in every situation, though, and older GeForces can’t use it as often as the newer GTX 200 series due to some architectural constraints.

That said, the Radeons have some constraints of their own, including the arguably more difficult instruction scheduling required by their five-ALU-wide superscalar execution units. So, of course, we’ll want to measure shader power with a few directed tests, as well.

To set the stage for that, note that the Radeon HD 4670 again is a potential overachiever. Its 480 GFLOPS of peak shader power match the single-issue numbers for the GeForce GTX 260, a much more expensive card. The GeForce 9600 GSO, the would-be competitor to the 4670, trails it by quite a bit, theoretically. However, the 4670’s relatively weak memory bandwidth could hold it back.

That appears to be just what happens. Despite having a little more theoretical shader prowess than the Radeon HD 3850, the 4670 trails the 3850 in each one of the shader tests. The GeForce 9600 GSO also has a leg up on the 4670 in each case.

All of this sets the stage nicely for what comes next, which is real game tests. Games will most likely care less about any single one of these performance factors. Instead, each one will stress its own distinct mix of them. The question is: how will our cheap graphics cards, with their obvious strengths and weaknesses, hold up overall?

Call of Duty 4: Modern Warfare
We tested Call of Duty 4 by recording a custom demo of a multiplayer gaming session and playing it back using the game’s timedemo capability. We’ve chosen to test at display resolutions of 1280×1024, 1680×1050, and 1920×1200, which were the three most popular resolutions in our hardware survey.

We didn’t go easy on the cheap cards, either—we enabled image quality enhancements like antialiasing and anisotropic filtering where appropriate. As you’ll see, most of the cards handled them quite well. Because the cheapest cards can suffer quite a bit from having such things enabled, though, we did test at 1280×1024 with AA and sometimes aniso disabled, depending on the game, to coax frame rates well into playable territory on most cards.

As you can see, every single card tested except for the GeForce 9500 GT is able to crank out frames at a rate of over 60 per second in CoD4 at 1280×1024 with edge antialiasing disabled, and even the 9500 GT averages well above 30 FPS. That demonstrates why cheap video cards are somewhat interesting these days. As the display resolutions and image quality increase, the pack separates into several clear groups. The Radeon HD 4670, 3850, and the GeForce 9600 GSO bunch together, as do the Radeon HD 4850 and GeForce 9800 GTX+. The GeForce 9600 GT and 9800 GT form their own group between the other two, while the 9500 GT is alone at the back of the pack.

To give you a sense of what these numbers mean, I found the Radeon HD 4670 to be borderline playable at 1680×1050. You might be able to play through the single-player campaign at these settings, but you would probably want to dial back something—the resolution, AA, or aniso—in order to get it running quickly enough for multiplayer. Still, that’s quite good for an $80 video card.

Half-Life 2: Episode Two
We used a custom-recorded timedemo for this game, as well. We tested with most of Episode Two‘s in-game image quality options turned up, including HDR lighting. Reflections were set to “reflect world,” and motion blur was disabled.

The basic groupings we saw in Call of Duty 4 are also apparent here, but frame rates are higher overall. Despite having half the memory bandwidth, the Radeon HD 4670 matches the 3850 once again. And in Episode Two, in my experience, both the 4670 and the GeForce 9600 GSO are more than up to the task of delivering smooth gameplay at 1680×1050 with 4X AA and 16X aniso. In fact, they’re both pretty good at 1920×1200, as well.

Enemy Territory: Quake Wars
We tested this game with “high” settings for all of the game’s quality options except “Shader level,” which was set to “Ultra.” Shadows and smooth foliage were enabled, but soft particles were disabled. Again, we used a custom timedemo recorded for use in this review.

Our usual groupings are upset a little bit here by the relatively stronger performance of the Radeons. The 4670 opens up a big lead on the GeForce 9600 GSO, and the Radeon HD 4850 takes a commanding lead over the GeForce 9800 GTX+. The thing is, we’re again looking at quite acceptable frame rates at higher resolutions with some of the cheaper cards, including the 4670.

Crysis Warhead
Rather than use a timedemo, I tested Crysis Warhead by playing the game and using FRAPS to record frame rates. Because this way of doing things can introduce a lot of variation from one run to the next, I tested each card in five 60-second gameplay sessions. The benefit of testing in this way is that we get more info about exactly how the cards performed, including low frame rate numbers and frame-by-frame performance data. The frame-by-frame info for each card was taken from a single, hopefully representative play-testing session.
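For the curious, here’s a minimal sketch of how numbers like these can be reduced, assuming each session’s FRAPS log has already been parsed into a list of per-frame render times in milliseconds. The parsing, the variable names, and the tiny placeholder data are illustrative only:

```python
from statistics import median

def avg_fps(frame_times_ms):
    """Average FPS for one session: frames rendered / elapsed seconds."""
    return len(frame_times_ms) * 1000.0 / sum(frame_times_ms)

def low_fps(frame_times_ms):
    """One simple notion of a session's low: the slowest single frame."""
    return 1000.0 / max(frame_times_ms)

# Placeholder frame times (ms) stand in for real 60-second captures here.
sessions = [
    [33.0, 29.0, 41.0, 35.0],
    [31.0, 38.0, 27.0, 33.0],
    [36.0, 30.0, 44.0, 32.0],
    [28.0, 34.0, 39.0, 31.0],
    [32.0, 37.0, 30.0, 42.0],
]

print(sum(avg_fps(s) for s in sessions) / len(sessions))  # mean of averages
print(median(low_fps(s) for s in sessions))               # median low
```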

We used Warhead‘s “Mainstream” quality level for testing, which is the second option on a ladder that has four steps. The “Gamer” and “Enthusiast” settings are both higher quality levels.

As you may know, performance in the original Crysis was a bone of contention for many people, who thought the game ran too slowly. The folks at Crytek claim to have paid more attention to making sure Warhead runs well on most systems, and they’ve even introduced a Warhead-ready PC that costs just $700. That system is outfitted with a GeForce 9800 GT, and based on our tests, that’s a pretty good fit. The median low frame rate we encountered on the 9800 GT was 25 FPS, or about the speed of a Hollywood movie. Averages were well above that. While playing the game with these settings, the 9800 GT’s performance was very much acceptable.

In fact, most of the cards were able to handle Warhead reasonably well. The glaring exception in this case was the Radeon HD 4670, which just wasn’t up to the task. In the heat of the action, frame rates dipped into the teens, and performance felt sluggish. That may have been its relatively anemic memory bandwidth coming into play; the Radeon HD 3850 didn’t struggle nearly as badly.

Overall, the Nvidia-based cards fared better in this brand-new game, which seems to be typical these days. I wouldn’t be surprised to see AMD deliver a driver update in the coming weeks that substantially improves Radeon performance with Warhead, but the fact remains that more games seem to come out of the box well optimized for GeForce cards.

Blu-ray HD video decoding and playback
One of the things that buying a new graphics card will get you that an older card or integrated graphics solution might not have is decode and playback acceleration for HD video, including Blu-ray discs. The latest GPUs include dedicated logic blocks that offload from the CPU much of the work of decoding the most common video compression formats. To test how well these cards perform that function, we used CyberLink’s new release 8 of PowerDVD, a Lite-On BD-ROM drive, and the Blu-ray disc version of Live Free or Die Hard. Besides having the “I’m a Mac” guy in it, this movie is encoded in the AVC format (also known as H.264) at a 28Mbps bit rate.

Video decode acceleration is particularly important for the miserly among us, because budget CPUs sometimes have trouble playing back compressed HD video without a little bit of help. Most of today’s dual-core processors can still handle it, but they won’t have many extra cycles left over to do anything else. Compounding the problem, we’ve found in the past that low-end video cards sometimes run into bandwidth limitations when assisting with HD video decoding, playback, and image scaling.

In order to really stress these cards, we installed a lowly Core 2 Duo E4300 processor (a dual-core 1.8GHz CPU) in our test rig, and we asked the cards to scale up our 1080p movie to fit the native 2560×1600 resolution of our Dell 30″ monitor. We then recorded CPU utilization over a duration of 100 seconds while playing back chapter four of our movie.
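If you were to script a test like this yourself, the sampling loop is trivial. Here’s a minimal sketch using the cross-platform psutil library as an illustrative stand-in for the tooling we actually used:

```python
# Sample whole-system CPU utilization once per second for 100 seconds,
# then report the average, roughly mirroring our playback test window.
import psutil

samples = [psutil.cpu_percent(interval=1) for _ in range(100)]
print(sum(samples) / len(samples))  # average CPU utilization, in percent
```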

Good news all around here. None of the video cards had any trouble playing our movie, with no apparent dropped frames or other playback glitches. As you can see, the Radeons tended to do a slightly better job of unburdening the CPU than the GeForces, but all of the cards proved to be more than capable—even the lowly GeForce 9500 GT.

We considered also testing image quality using the HD HQV benchmark, but we decided against it for a number of reasons. HQV tests the efficacy of post-processing algorithms like noise reduction and edge enhancement, but the reality is that on a properly mastered Blu-ray disc, you really won’t need such things. In fact, Nvidia’s drivers leave both noise reduction and edge enhancement off by default. AMD’s do have noise reduction enabled, but not at a very aggressive setting. Nvidia suggests reviewers tune the video card’s noise reduction slider manually in order to achieve the best score in HQV, but I have a hard time imagining that many users would tune their video cards’ noise reduction algorithms on a per-video basis.

The big thing to take away from these tests is that even the least capable video card in the bunch is more than adequate at accelerating HD video playback.

Power consumption
We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Half-Life 2 Episode Two at 1680×1050 resolution, using the same settings we did for performance testing.

One nice benefit of a low-end graphics card is low power consumption, as illustrated by these results. The Radeon HD 4670’s power draw, both at idle and when running a game, is quite good, especially considering its performance. The GeForce 9600 GSO draws more power all around.

Noise levels
We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 12″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

So what happened? Quite simply, most of the cards didn’t register above the ~40 dB threshold of our sound level meter. That’s good news overall, even though it means I need to buy a new sound level meter soon. Of course, the passively cooled Zotac card isn’t likely to show up on any sound level meter, but the reality here is that many of these coolers, especially the custom ones on the BFG Tech 9600 GT and the Palit 9800 GT, are exceptionally quiet. Some of the higher-end cards are louder, especially the Radeon HD 4850 and the GeForce 9800 GTX+, because they generate more heat and must deal with it.

GPU temperatures
Per your requests, I’ve added GPU temperature readings to our results. I captured these using AMD’s Catalyst Control Center and Nvidia’s nTune Monitor, so we’re basically relying on the cards to report their temperatures properly. These temperatures were recorded while running the “rthdribl” demo in a window.

You might have expected the Zotac 9500 GT with its passive cooler to run hot, but AMD’s latest Radeons do, too. That seems to be the result of a conscious decision on AMD’s part to tune its fan speed controllers to allow higher temperatures. The tradeoff here is that the Radeons are relatively quiet. Some folks have raised longevity concerns about video cards that run this hot, but AMD insists its cards can handle those temperatures. We’re still waiting to get our hands on a Radeon HD 4850 with a custom cooler that might produce both lower temperatures and noise levels than the stock one. Again, the custom coolers from BFG and Palit are exemplary on both fronts.

I should address a common misconception while we’re talking about these things. The fact that a video card runs at higher temperatures doesn’t necessarily mean it will heat up the inside of your PC more than a cooler-running card. You’ll want to look at power consumption, not GPU temperatures, to get a sense of how much heat a video card produces, since virtually all of the power a card draws ends up as heat. For example, the Radeon HD 4670 is among the hottest cards in terms of GPU temperatures, but it draws less power, and thus produces less heat, than almost anything else we tested. Its higher temperatures are simply the result of the fan speed/temperature thresholds AMD has programmed into the card.

Conclusions
So can you get away with spending less than $100 on a video card? In certain circumstances, you bet. If you have a monitor that’s 1280×1024 or smaller, a very affordable graphics card like the $80 Radeon HD 4670 will allow you to play many of the latest games with ease. Even at 1680×1050, in fact, the Radeon HD 4670 and GeForce 9600 GSO can produce acceptable frame rates. You may have to compromise a bit, dialing back features like antialiasing or in-game image quality settings, in order to get acceptable performance in the most demanding of today’s games, but the compromises probably won’t be too terrible. That’s particularly true for the many games ported to the PC or co-developed for game consoles. The limits of the Xbox 360 and PlayStation 3 establish a baseline that even some of the cheapest PC graphics cards can meet.

Among those cheaper cards, the Radeon HD 4670 sets a new standard for price-performance ratio and all-around desirability. Compared to the would-be competition from Nvidia, the GeForce 9600 GSO, the 4670 has slightly higher overall performance, lower CPU utilization during Blu-ray playback, less need for clearance inside of a PC chassis, and lower power consumption. Thanks to this last trait, the 4670 doesn’t require a separate PCIe power lead, either, so it should slot right into granny’s Dell (or yours) with very little drama. And you don’t have to rely on a mail-in rebate in order to get the 4670 at its list price of $79.99, unlike the 9600 GSO.

Still, you’ve seen the numbers in the preceding pages. Make up your own mind, but personally, I can’t get past the value proposition of cards like the Radeon HD 4850 and the GeForce 9800 GTX+. Especially the 4850. New games are coming out all of the time, and many of them, like Crysis Warhead, will make a bargain-priced GPU beg for mercy. Reaching up into the 4850’s range ($140 after rebate, $170 before) will get you roughly twice the real-world GPU power of a Radeon HD 4670. That’s ample graphics power to turn up the eye candy in most games, even at 1920×1200, and some honest-to-goodness future-proofing, too. That’s also value even a cheapskate like me can appreciate.
