
AMD’s Radeon HD 5830 graphics card

Scott Wasson
Disclosure
In our content, we occasionally include affiliate links. Should you click on these links, we may earn a commission, though this incurs no additional cost to you. Your use of this website signifies your acceptance of our terms and conditions as well as our privacy policy.

AMD’s line of DirectX 11 graphics cards has been fleshed out rather nicely since its inception late last year, with products spanning from well under a hundred bucks to somewhere north of $600. Yet that product line has always had a metaphorical gaping hole right in the center, between the $170-ish Radeon HD 5770 and the $300-ish Radeon HD 5850. That’s huge. You could drive a metaphorical truck through it, metaphorically speaking. And a great many PC gamers like to buy their graphics cards precisely within that soft spot between 200 and 250 bucks, because a nice mix of price and performance traditionally can be had there.

At long last, AMD is endeavoring to address that vibrant segment of the market with its latest Radeon, the HD 5830. That’s probably a good thing, too, since Nvidia’s DX11 GPUs are later than closing time at Taco Bell—and, unlike Taco Bell, not really poised to address the more value-conscious segments of the market.

Now that we’ve successfully compared a GPU to an Enchirito, my work here is nearly done. All that remains is to evaluate the Radeon HD 5830’s performance and value proposition versus the other offerings on the market—and, for potential upgraders, against some rather older graphics cards from the same price range.

Less stuff, more speed, say what?
If you’ve read our Radeon HD 5870 review, you can learn everything you need to know about the Radeon HD 5830 in a few short paragraphs. The 5830 is based on the same “Cypress” GPU as the 5870, but it’s had various internal bits and pieces deactivated in the name of product segmentation. That fact may seem tragic, but GPU makers have long engaged in such practices, in part because doing so lets them make use of chips that don’t work perfectly. Heck, the Radeon HD 5850 has already followed that path, and the 5830 comes behind it.

The 5830 is a little strange, though. Not only have six of Cypress’s 20 SIMD cores been disabled—leaving it with 1120 ALUs (or stream processors, as AMD calls them) and 56 texels per clock of filtering capacity—but fully half of the render back-ends or ROPs have been nixed, as well. That means the 5830 has considerably less pixel throughput and antialiasing power than its elder siblings. Yet AMD has left its four 64-bit memory interfaces entirely intact, so that its 1GB of 4Gbps GDDR5 memory yields just as much memory bandwidth as the Radeon HD 5850. Freaky! That makes for a very different balance of resources than other Cypress-based cards. The evil geniuses in AMD product planning have somewhat compensated for this mass deactivation of ROPs by giving the 5830 an 800MHz core clock speed—that’s 75MHz higher than the 5850, believe it or not.
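As a sanity check, the theoretical peaks in the table below fall straight out of the unit counts and clock speed above. This is a quick sketch, with a couple of assumptions: 16 active ROPs (half of Cypress's 32) and two FLOPS per ALU per clock (one multiply-add), per the usual Cypress math.

```python
# Back-of-the-envelope check of the 5830's theoretical peaks.
# Assumptions: 16 active ROPs and 2 FLOPS per ALU per clock.

core_clock_ghz = 0.8        # 800MHz core clock
alus = 1120                 # stream processors left enabled
texels_per_clock = 56       # texture filtering capacity
rops = 16                   # render back-ends after the cuts
mem_rate_gbps = 4.0         # GDDR5 data rate
bus_width_bits = 256        # four 64-bit memory interfaces

pixel_fill = rops * core_clock_ghz              # Gpixels/s
texel_rate = texels_per_clock * core_clock_ghz  # Gtexels/s
bandwidth = mem_rate_gbps * bus_width_bits / 8  # GB/s
gflops = alus * 2 * core_clock_ghz              # GFLOPS

print(f"{pixel_fill:.1f} {texel_rate:.1f} {bandwidth:.1f} {gflops:.0f}")
# -> 12.8 44.8 128.0 1792
```

Those figures line up with the table, including the 5850-matching 128 GB/s of memory bandwidth.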

                    Pixel        Texel        Memory      Shader
                    fill rate    filtering    bandwidth   arithmetic
                    (Gpixels/s)  rate         (GB/s)      (GFLOPS)
                                 (Gtexels/s)

Radeon HD 5770      13.6         34.0         76.8        1360
Radeon HD 5830      12.8         44.8         128.0       1792
Radeon HD 5850      23.2         52.2         128.0       2088
Radeon HD 5870      27.2         68.0         153.6       2720

The math pretty much works out in the end, though, with the 5830 nestled in between the 5770 and 5850 in key rates for texture filtering and shader arithmetic. So long as its relatively low ROP throughput balances out its beefy memory bandwidth, the 5830 shouldn’t cause too many fights at the dinner table.

After deliberations that apparently continued right up until the eve of the 5830’s introduction, AMD has chosen to set the 5830’s suggested e-tail price almost equidistant between the 5770 and 5850 at $240, or in gas-station format, $239.9999.

We expect unusual variety from the Radeon HD 5830 on a number of fronts, in part because AMD hasn’t produced a reference design for this product. Instead, it has suggested board makers might want to base their cards on the Radeon HD 5870 board design. Board vendors are free to do otherwise, though, and they’ll have to come up with their own custom cooling solutions.

XFX’s version of the 5830 will be based on the same relatively compact board design as the firm’s 5850 card, with an angular custom cooler apparently intended to confuse radar systems. XFX expects these cards to hit online retailers later this week at a price around the $239 mark, bundled with a copy of Aliens vs. Predator, a new game fortified with DirectX 11 effects.

We don’t have too much information yet about Gigabyte’s offering, but it will apparently feature twin props for extra thrust, along with a much larger heatsink and longer PCB than the XFX card. Gigabyte looks to have used the 5870 reference board as its template, at least for this first attempt. The result appears to be a much longer card than even the standard Radeon HD 5850.

Sapphire has followed a similar path with its 5830, although it has stuck with a single, enormous fan for the cooler. This card is already listed at Newegg for $239.99, although it’s not currently in stock. Another version bundled with Modern Warfare 2 is apparently available now for $264.99. You can probably expect to see MW2 bundled with 5830 cards from a number of AMD’s partners, although we were kind of expecting to see cards priced right at $240 to include MW2. That $25 premium makes the game bundle less enticing.

I have to say that, at first glance, the selection of custom coolers above is a little bit disappointing. The reference coolers from both AMD and Nvidia these days use a blower situated at the end of the card. The blower pulls air in from the system, pushes it through a shroud across the heatsink and the GPU, and exhausts the heated air out the back of the case. This arrangement tends to work very well, even in cramped quarters and multi-GPU configurations. The cards pictured above might have spectacular thermals, amazingly low noise levels, and excellent adaptability—we don’t know, since we haven’t tested them yet. But we’ve had problems with similar coolers in the past, especially in multi-GPU configurations. Given the choice, I’d prefer a proper blower-and-shroud combo any day, especially since that sounds kinda racy.

Heck, we wound up with just such a combo, since the 5830 card we received from AMD for testing looks to be a Radeon HD 5870 with the appropriate clock speeds set and bits disabled. This card should do a fine job of representing the 5830’s performance, but noise levels, GPU temperatures, and even power consumption may vary on the actual products. That hasn’t stopped AMD from offering power consumption estimates of 125W at peak and 25W when idle, though.

Eyefinity to the sixth: Coming soon
There’s one more member of the Radeon HD 5800 family slated for release soon: the Radeon HD 5870 Eyefinity6 edition, the specialized card code-named “Trillian” that will feature six display outputs. This isn’t just a display wall setup intended for department stores, either. True to its name, the Eyefinity6 will allow for multi-monitor gaming across six displays, amazingly enough.


An early Trillian card. Can you say “connector density”?

This card will differ from the regular 5870 in several respects, the most obvious being the ominous array of six Mini DisplayPort connectors poking out of the expansion slot cover. Unlike the regular 5870, the Eyefinity6 card will require an eight-pin auxiliary power input in addition to a six-pin one, because it will draw more power when driving six monitors simultaneously. Another adjustment is the addition of a second gigabyte of memory on the board. Since six 30″ displays total approximately 24 million pixels, the additional on-board RAM will likely be needed, he said in a breathtaking understatement.
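For the curious, the arithmetic behind that 24-million-pixel figure is straightforward, assuming six 30″ panels at the typical 2560×1600 resolution (an assumption on our part; panel choice is up to the user):

```python
# Six 30" displays at 2560x1600 each (assumed; the common 30" res).
pixels = 6 * 2560 * 1600
print(pixels)  # -> 24576000, roughly 24.6 megapixels

# A single 32-bit color buffer at that size, before any render
# targets or textures are even considered:
print(round(pixels * 4 / 2**20))  # -> 94 (MB per buffer)
```

With multiple buffers plus textures and geometry in flight, the case for a second gigabyte of memory makes itself.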


Even a six-megapixel config benefits from 2GB of RAM, according to AMD.

The key to attaching more than two displays to today’s Radeons is DisplayPort, so that will be the Trillian card’s preferred connection type. In order to accommodate up to two older monitors, though, the card will ship with a grand total of five adapters: two passive Mini-DP-to-DVI adapters, one passive Mini-DP-to-HDMI adapter, and two Mini DisplayPort-to-DisplayPort plug converters. That should cover most eventualities, although I could see a lot of folks needing more Mini-DP-to-DP adapters.

A drool bucket, unfortunately, is not included, so you’ll have to make your own arrangements there.


Dead Space at over 24 million pixels

We don’t have exact pricing yet, but AMD expects the 5870 Eyefinity6 edition to ring up at somewhere between $400 and $500.

We had initially expected Trillian cards to be available way back in late September, not long after the 5870’s launch. A couple of things conspired against Trillian’s timely introduction, including supply problems with Cypress GPUs and, especially, an OS compatibility snag. Although AMD was able to present six displays to Windows Vista as a single, large surface ready for use in 3D accelerated games, making that happen in Windows 7 involved an additional technical hurdle. Win7 would allow for up to four 3D-accelerated displays, but not six. Implementing the software changes to work around this OS limitation took some time, and AMD elected to hold off on introducing the Eyefinity6 product until that problem was solved.

Happily, Trillian setups should now benefit from some of the major feature improvements in AMD’s newer Catalyst drivers, including the ability to define display groups, easier switching between different multi-monitor configs, and, thank goodness, bezel compensation for Eyefinity displays.

One of those improvements is the ability to combine CrossFire multi-GPU setups with multi-monitor Eyefinity display surfaces. The appeal here is obvious, since pushing 24 megapixels with a single 5870 GPU is possible and sometimes quite workable, but not for every game. Generating that many pixels at the right quality levels would tax any single graphics chip. Making CrossFire work on this scale presents some challenges, however, as AMD readily admits. The core issue is the fact that the dedicated CrossFire interconnect used for passing completed frames between cards has “only” enough bandwidth to sustain a 2560×1600 display resolution. Even three 1080p displays will exceed its capacity. The alternative is to transfer frame buffer data via PCI Express, which is what AMD does when necessary. Using PCIe works, but it can limit performance scaling somewhat—don’t expect to see the near-linear scaling one might get out of a dual-card setup in the right game with a single display. That’s not to say mixing CrossFire with Eyefinity won’t be worth doing. Based on AMD’s performance estimates, frame rates could improve between about 25% and 75% when adding a second GPU with a 12-megapixel, six-monitor array.
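The pixel math behind that interconnect ceiling is easy to sketch out, using the resolutions mentioned above:

```python
# Per AMD, the CrossFire interconnect can pass completed frames up to
# one 2560x1600 display's worth of pixels.
interconnect_limit = 2560 * 1600   # 4,096,000 pixels
three_1080p = 3 * 1920 * 1080      # 6,220,800 pixels

print(three_1080p > interconnect_limit)  # -> True, so PCIe takes over
```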

In fact, this is something we’d like to test soon. You know, for science.

One might be tempted to pair up a regular Radeon HD 5870 with an Eyefinity6 version for maximum performance, but remember that each card in a CrossFire group takes on the attributes of the lowest-spec card in the bunch. In this case, pairing a Radeon HD 5870 1GB with a 2GB Eyefinity6 card would reduce the effective memory size of each card to 1GB—one step forward, two steps back, most likely. Just something to keep in mind if you’re looking to build such a setup.

Another possibility that would seem to make a certain amount of sense would be a dual-GPU Radeon HD 5970 board with six display outputs and, say, 4GB of memory onboard (2GB per GPU). AMD wasn’t ready to announce such a beast when we inquired about this possibility, but we were told to “watch this space.” The firm says it won’t prevent add-in board makers from concocting such a monster, so we could well see something along those rather intimidating lines on the market before long.

Grandpa laces up his skates
Our recent foray into providing some broader historical context for our hardware reviews proved sufficiently popular that we thought we’d try it again, this time for graphics cards. To make that happen, we had to dial back the number of games we tested and focus instead on a breadth of cards—all within the context of a very limited amount of time for testing.

At any rate, we have chosen to test a number of current graphics cards from just above, just below, and somewhat near the 5830’s price. We’ve also selected some older cards from about the same price class dating back over a number of years. Here are our key match-ups to watch:

  • Radeon HD 5830 vs. erm… — Finding a direct competitor to the Radeon HD 5830 isn’t easy. The natural candidate would have been the GeForce GTX 275, but those have essentially evaporated from e-tail listings. For the time being, they seem to have been replaced by higher-clocked variants of the GeForce GTX 260, which offer essentially the same mix of price and performance. We have chosen one such card—an Asus with 216 SPs, a 650MHz core clock, 1.4GHz shaders, and 2.3GHz memory—to represent the GTX 260 in our testing. Cards like this one will set you back about $220. Lower-clocked GTX 260 variants may cost you less, but they are all in alarmingly short supply these days.

    Although the 5830 replaces the Radeon HD 4830 in spirit, it’s more of a direct successor, price-wise, to the Radeon HD 4890. Due to limited testing time and our desire to situate the 5830 among its Radeon HD 5000-series brethren, we didn’t include a 4890 in our tests. An outrage, I know! But we did include the Radeon HD 4870, simply because we figure more folks own them and might be considering an upgrade. The 4890 should typically be about 10-15% faster than the 4870, for comparison.

    In a sense, the 5830’s most notable competition may be the 5770 and 5850, since they’re also DX11 products with potentially more appealing value propositions.

  • Radeon HD 4850 vs. GeForce 9800 GTX — We decided to go historical on this one a little bit, so rather than comparing the more current offerings based on these GPUs, the GeForce GTS 250 and the Radeon HD 4850 1GB, we reached back into the parts bin for the original items. Our Radeon HD 4850 512MB is an Asus model from the first wave back in June of ’08.

    At the time of the 4850’s launch, the incumbent offering from Nvidia was the GeForce 9800 GTX, dating back to April of 2008. Nvidia quickly countered the 4850 with higher-clocked variants of the 9800 GTX, including the 9800 GTX+ based on a newer 55-nm GPU. We probably should have tested that one, but when I reached into the Damage Labs parts bin, for some reason, I pulled out an original XFX 9800 GTX with default clocks of 675MHz (core), 1688MHz (shaders) and 2200MHz (memory), well below the 738/1836/2200MHz frequencies of the 9800 GTX+. Truth be told, GeForce 9800 GTX cards with their original clock speeds kept selling for quite a while after the introduction of the Radeon HD 4850 before the 9800 GTX+ overtook them, but I’m still kicking myself over this selection.


From left to right: Radeon HD 5770, Radeon HD 4850, and Radeon HD 3870
  • Radeon HD 3870 vs. GeForce 8800 GT — Now we’re getting a little older school. The GeForce 8800 GT ruled the $199-249 price range for quite a while, starting with its October 2007 unveiling. The 8800 GT’s price-performance ratio was a revelation at the time, and when AMD pulled back the curtain on the Radeon HD 3870 the following month, it struggled to keep pace. Nevertheless, both were good values at the right price, and they were both quite popular, as was the Radeon HD 3850, a close relative of the 3870.

    We’ve chosen a couple of representatives left over from our massive comparo of mid-range graphics cards back in early 2008. Asus’s EN8800GT TOP has clocks of 700/1750/2020MHz, north of the GeForce 8800 GT’s base frequencies of 600/1500/1800MHz, which puts its performance alarmingly close to our GeForce 9800 GTX’s, as you’ll see. We gave it a TR Recommended award in that comparo. Asus represents the other team, too. The EAH3870 TOP’s 850MHz core clock is well above the Radeon HD 3870’s stock 775MHz, while its 2.25GHz memory matches the stock rate.


From left to right: GeForce 8800 GT, GeForce 7900 GTX, GeForce 7900 GS
  • GeForce 7900 GS & GTX vs. the whippersnappers — Back in the fall of 2006, the new hotness at $199 was the GeForce 7900 GS. We reviewed it, liked it, and told folks “the GeForce 7900 GS stands alone as the best value in graphics.” The 7900 GS delivered performance levels closely comparable to its more expensive predecessors, including the GeForce 7800 GTX and the GeForce 7900 GT, for hundreds less. ATI soon countered with the Radeon X1950 Pro, and competitive balance was restored.

    Since we had quite a few more Radeons than GeForces to manage from newer generations, I decided to test older cards from the Nvidia camp. The 7900 GS seemed like a perfect candidate, but as we got into the test results, I realized something: with only 256MB, the 7900 GS was likely to be performance limited by its video memory size. To rectify that situation, I pulled out an old GeForce 7900 GTX 512MB, from May of ’06, and put it through the paces, too. So we’ve included not one, but two four-year-old video cards in the bunch.

Test notes
Our GPU test rigs have been pretty much the same for a while now, and since they’re based on Core i7-965 Extreme CPUs and Gigabyte X58 motherboards, we see little need to upgrade the core components. However, Corsair recently hooked us up with some new memory kits for these systems that take us all the way to 12GB in six DIMMs.

These are high-grade Dominator DIMMs, and we expect the extra RAM could be helpful when we start installing multiple 2GB video cards in these systems and driving three or more displays. Although these Dominators come with auxiliary DIMM cooling fans, we found that our open-air test rigs were perfectly stable without them, impressively enough. Our thanks to Corsair for providing the memory.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor        Core i7-965 Extreme 3.2GHz
Motherboard      Gigabyte EX58-UD5
North bridge     X58 IOH
South bridge     ICH10R
Memory size      12GB (6 DIMMs)
Memory type      Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1333MHz
Memory timings   8-8-8-24-2T
Chipset drivers  INF update 9.1.1.1015, Matrix Storage Manager 8.9.0.1023
Audio            Integrated ICH10R/ALC889A with Realtek 6.0.1.5919 drivers
Graphics         Asus EAH3870 TOP Radeon HD 3870 512MB with Catalyst 8.703-100210a-095560E drivers
                 Asus EAH4850 Radeon HD 4850 512MB with Catalyst 8.703-100210a-095560E drivers
                 Diamond Radeon HD 4870 1GB with Catalyst 8.703-100210a-095560E drivers
                 Gigabyte Radeon HD 5770 1GB with Catalyst 8.703-100210a-095560E drivers
                 Radeon HD 5830 1GB with Catalyst 8.703-100210a-095560E drivers
                 Radeon HD 5850 1GB with Catalyst 8.703-100210a-095560E drivers
                 Asus EAH5870 Radeon HD 5870 1GB with Catalyst 8.703-100210a-095560E drivers
                 XFX GeForce 7900 GS 480M 256MB with ForceWare 196.34 beta drivers
                 GeForce 7900 GTX 512MB with ForceWare 196.34 beta drivers
                 Asus EN8800GT TOP GeForce 8800 GT 256MB with ForceWare 196.34 beta drivers
                 XFX GeForce 9800 GTX 675M 512MB with ForceWare 196.34 beta drivers
                 Asus ENGTX260 TOP GeForce GTX 260 896MB with ForceWare 196.34 beta drivers
                 Asus ENGTX285 TOP GeForce GTX 285 1GB with ForceWare 196.34 beta drivers
Hard drive       WD Caviar SE16 320GB SATA
Power supply     PC Power & Cooling Silencer 750W
OS               Windows 7 Ultimate x64 Edition RTM

Thanks to Intel, Corsair, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, XFX, Asus, Diamond, and Gigabyte supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Running the numbers
We’ve already looked at the theoretical peak numbers for the Radeon HD 5830 compared to its closest relatives, but the table below will put it into a bit broader perspective.

                           Peak         Peak bilinear  Peak bilinear  Peak       Peak shader
                           pixel        INT8 texel     FP16 texel     memory     arithmetic
                           fill rate    filtering      filtering      bandwidth  (GFLOPS,
                           (Gpixels/s)  rate           rate           (GB/s)     single/dual-issue)
                                        (Gtexels/s)    (Gtexels/s)

GeForce 7900 GS            7.7          9.6            n/a            44.8       n/a
GeForce 7900 GTX           10.4         15.6           n/a            51.2       n/a
GeForce 8800 GT            11.2         39.2           19.6           64.6       392/588
GeForce 9800 GTX           10.8         43.2           21.6           70.4       432/648
GeForce GTS 250            12.3         49.3           24.6           71.9       484/726
GeForce GTX 260 (216 SPs)  18.2         46.8           23.4           128.8      605/907
GeForce GTX 275            17.7         50.6           25.3           127.0      674/1011
GeForce GTX 285            21.4         53.6           26.8           166.4      744/1116
Radeon HD 3870             13.6         13.6           13.6           73.2       544
Radeon HD 4850             11.2         28.0           14.0           63.6       1120
Radeon HD 4870             12.0         30.0           15.0           115.2      1200
Radeon HD 4890             14.4         36.0           18.0           124.8      1440
Radeon HD 5750             11.2         25.2           12.6           73.6       1008
Radeon HD 5770             13.6         34.0           17.0           76.8       1360
Radeon HD 5830             12.8         44.8           22.4           128.0      1792
Radeon HD 5850             23.2         52.2           26.1           128.0      2088
Radeon HD 5870             27.2         68.0           34.0           153.6      2720

These theoretical capacities don’t correspond directly to performance, of course. Much depends on the quirks of the GPU architectures and their implementations. We can measure some of these things with directed tests, though, to give us a sense of how the cards compare. Sadly, we’ve not been able to include the older, DirectX 9-only graphics cards in these tests, because 3DMark Vantage requires DirectX 10.

I’ve only included partial information in the table above for the two GeForce 7-series cards, in part because of some limitations of these older architectures. For example, the G71 GPU could filter FP16 texture formats, but it couldn’t do so in conjunction with multisampled antialiasing. Counting FLOPS on a non-unified shader design is also a little tricky, so I’ve abstained. Nonetheless, progress in the past four years has been substantial. The Radeon HD 5830 has 4.6 times the texture filtering capacity and 2.8 times the memory bandwidth of the GeForce 7900 GS. Similarly, the Radeon HD 5870 has 4.4 times the filtering rate and triple the memory bandwidth of the GeForce 7900 GTX.

We’ve often thought that GPU performance in 3DMark’s color fill rate test seems to be limited primarily by memory bandwidth. Notice how much faster the Radeon HD 4870 is than the Radeon HD 5770, for instance. The 5770 has a slightly higher theoretical peak fill rate, but the 4870 has nearly twice the memory bandwidth and proves markedly faster in this directed test.

The 5830, however, breaks that trend by delivering a much lower measured fill rate than the 5850, though their memory bandwidth on paper is identical. Heck, the 4870 outscores the 5830, too, even though it has slightly less theoretical peak fill rate and memory bandwidth. Something about the way AMD pruned back the Cypress GPU’s render back-ends produces unexpectedly poor results in this test.

3DMark Vantage was released in April 2008, and only in the past few weeks has Futuremark fixed the units output by its texture fill rate test. I’m not sure why this obvious bug, about which we exchanged e-mails with Futuremark several times back in ’08, took so long to squish. At least we now have our first set of 3DMark texturing results that make intuitive sense.

Those results show us something we’ve long known: that AMD’s recent GPUs score much better than Nvidia’s in this benchmark. Being able to put units to them, though, gives us some additional insight. Notice how the Radeons reach very close to their theoretical peaks for INT8 filtering, while the GeForces are just as close to their half-rate FP16 peaks. We’ve long thought this was a test of FP16 texture filtering rate. What’s going on here?

When we asked Nvidia to explain why its GPUs were only reaching about half of their potential, we received an interesting answer. Turns out, Nvidia told us, that this test does indeed use FP16 texture formats, but it doesn’t filter the textures, even bilinearly. It’s just point sampled, believe it or not. The newer Radeons, it seems, can point-sample FP16 textures at their full rate, even though they can’t filter them at that rate. Nvidia’s GT200 samples FP16 textures at half of the INT8 rate, hence the disparity. Interestingly, Nvidia says the upcoming GF100 can sample FP16 textures at full speed, so it should perform better in this test, once it arrives. Trouble is, we’d really rather be measuring the texture filtering rates, which matter more for games, than the raw texture sampling rates of these GPUs.

For what it’s worth, the Radeon HD 5830 does sample FP16 textures at a much higher rate than the Radeon HD 4870 or 5770. In theory, it should be able to filter them faster, as well.

Performance on these shader power benchmarks tends to vary quite a bit from one GPU architecture to the next. As a result, the 5830 exchanges victories with its closest rival, the GeForce GTX 260, from one test to the next. Meanwhile, the 5770 and 5850 tend to bracket the 5830 exactly as one would expect. The more interesting result may be the fact that the 5830 is between two and four times the speed of the Radeon HD 3870.

DiRT 2
This excellent new racer packs a nicely scriptable performance test. We tested at the game’s “high” quality presets with 4X antialiasing in both DirectX 9 and DirectX 11 modes (DiRT 2 appears to lack a DX10 mode). For our DirectX 11 testing, we enabled tessellation on the crowd and water. Because this automated test uses computer A.I. and involves some variance, we tested five times at each resolution and have reported the median results.

This is one very good-looking game, but astoundingly, even the Radeon HD 3870 is able to play it pretty fluidly—minimum frame rate: 29 FPS—at 1920×1080. Everything faster is more than capable, including the 5830.

The match-up between the GeForce GTX 260 and the new Radeon comes down to performance scaling at different resolutions. The 5830 is faster at lower resolutions, but the GTX 260 is increasingly more competitive as the demands on the GPU grow. At 2560×1600, the 5830 trails, though by a trivial margin of a few frames per second.

The GeForce 7900 cards are nothing if not consistent. The answer appears to be: 13 FPS. No matter what. The 7900 GS, however, runs out of video memory at 2560×1600 and can’t even start the game. I did test the GeForce 7 cards at 1366×768, as well, and guess what? 13 FPS for the 7900 GS and 14-15 FPS for the 7900 GTX.

DiRT 2‘s extra DirectX 11 effects don’t change the look of the game too terribly much, but they do tax the GPUs quite a bit more. Here, the gap between the Radeon HD 5830 and 5770 is negligible, while the 5850 is about 10 FPS faster at each res.

Borderlands
We tested Gearbox’s post-apocalyptic role-playing shooter by using the game’s built-in performance test. We tested with all of the in-game quality options at their max. We couldn’t enable antialiasing, because the game’s Unreal Engine doesn’t support it.

Embarrassingly, the 5830 is barely any faster than the 4870 in Borderlands—and in this case, we can’t blame it on the reduction in anti-aliasing power caused by the 5830’s ROP-ectomy. We’re not even using AA. The GeForce GTX 260 is substantially faster in this game, and overall, the Nvidia cards tend to have higher minimum frame rates than the Radeons.

Even without antialiasing, Borderlands is off the menu for the GeForce 7 cards. We could probably scale back the resolution and image quality quite a bit and get this game to run acceptably, but the settings we’re using simply overwhelm their abilities. Meanwhile, the GeForce 8800 GT looks to have been a much wiser choice than the Radeon HD 3870 in light of this contemporary game. The 3870’s average frame rate of 28 FPS at 1680×1050 merely matches the 8800 GT’s minimum frame rate. In fact, even at 1920×1080, you may not need to upgrade from an 8800 GT, given the frame rates it’s producing.

Left 4 Dead 2
In Left 4 Dead 2, we got our data by recording and playing back a custom timedemo comprised of several minutes of gameplay.

Here’s another example of the 5830 barely outperforming the Radeon HD 4870, which isn’t really a good omen for a $240 graphics card. Once again, the GTX 260 is faster at the highest resolution, too.

On the historical front, the GeForce 8800 GT continues to spank the Radeon HD 3870 in newer games. The 3870 can’t really handle this game at these quality levels. And yeah, the GeForce 7-series cards are overmatched yet again. Purely in terms of frame rates, at 1920×1080, the 5830 is seven times the speed of the GeForce 7900 GS. Yeah, it might be time to upgrade.

Call of Duty: Modern Warfare 2
Modern Warfare 2 generally runs pretty well on most modern PC hardware, but it does have some parts where lots of activity and heavy use of shader effects can slow it down. We chose to test performance in one such area, where you’re in a firefight inside of an office building. This close-quarters fight involves lots of flying debris, smoke, and a whole mess of enemy soldiers cooped up with your squad in close proximity.

To test, we played through this scene for 60 seconds while recording frame rates with FRAPS. This firefight is chaotic enough that there’s really no hope of playing through it exactly the same way each time, although we did try the best we could. We conducted five playthroughs on each card and then reported the median of the average and minimum frame rate values from all five runs. The frame-by-frame results come from a single, representative test session.

We had all of MW2’s image quality settings maxed out, with 4X antialiasing enabled, as well. We could only include a subset of the cards at this resolution, since the slower ones couldn’t produce playable frame rates.

At last, the 5830 acquits itself reasonably well in one of our gaming tests. The GTX 260 is a tad quicker, but the two cards offer essentially equivalent performance, and the 5830 clearly outperforms the 5770 and 4870.

Power consumption
Because we don’t yet have our hands on a production version of the Radeon HD 5830, and because our testing time was limited, we’ve truncated the last few bits of our usual test suite. We’ve deferred GPU temperature and graphics card noise measurements until we have a production card, and I chose to draw on the results from our Radeon HD 5700 series review to give you a sense of the 5830’s relative power consumption. Although we used older drivers for most of the cards in that review, we don’t expect that to affect power consumption dramatically. Only the results from the 5830 are new here, and yes, we popped out three of the DIMMs so our test rig’s RAM config matched the one from our 5700 review.

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at a 2560×1600 resolution.

These results largely fit our expectations for a Cypress-based graphics card. Either a modification to the 5830 reference card or some change to our graphics test config (such as newer drivers) has allowed the 5830-based system to shed six watts at idle versus the Radeon HD 5870. I’m not at all surprised to see the 5830 drawing a little more power under load than the 5850. Although the 5830 has more units disabled, its higher clock speed probably requires higher voltage flowing to the chip, and voltage is the single biggest determinant of power draw. Regardless, the 5830’s power consumption is quite reasonable and manageable.

The value proposition
Before we conclude, we can take a quick analytical look at the Radeon HD 5830’s value proposition. We’ll be taking the same basic approach we have in our past looks at GPU value, but we’ll be basing our results on current pricing and performance.

Per our custom, we’ve averaged pricing for the graphics cards from Newegg, paying careful attention to the higher prices often associated with higher-clocked variants of specific cards, like our GeForce GTX 260. We did our best to sample prices appropriate to the cards as tested. The exception, of course, is the Radeon HD 5830, where we’ve used AMD’s suggested e-tail price.

For performance, we used the average of the frame rates across the four games we tested. In cases where we tested at multiple resolutions, we focused on the highest resolution, since that’s where performance is most strictly GPU-constrained. For the math geeks among us, we considered using a harmonic mean of the frame rates, but we decided against it. We’re aware of the conventional wisdom on this point and are considering how to treat this issue in these little value exercises going forward.
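For the curious, the gap between the two averaging methods is easy to illustrate. The sketch below uses made-up frame rates (illustrative numbers only, not our measured results) to show how a harmonic mean weights a card’s slowest result more heavily than a simple arithmetic mean does:

```python
from statistics import mean, harmonic_mean

# Hypothetical frame rates (FPS) across four games -- illustrative
# values only, not measurements from this review.
fps = [120.0, 80.0, 60.0, 20.0]

# The arithmetic mean lets one very fast result mask a slow one;
# the harmonic mean drags the average toward the weakest showing.
arith = mean(fps)          # (120 + 80 + 60 + 20) / 4 = 70.0
harm = harmonic_mean(fps)  # 4 / (1/120 + 1/80 + 1/60 + 1/20) ≈ 45.7

print(f"arithmetic mean: {arith:.1f} FPS")
print(f"harmonic mean:   {harm:.1f} FPS")
```

A card that tanks in one game thus looks noticeably worse under a harmonic mean, which is the crux of the conventional wisdom mentioned above.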

Mashing up price and performance together allows us to produce a simple scatter plot where the best values will tend to be closer to the top left corner of the plot area.
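Conceptually, each card in that plot is just a (price, frame rate) point, and dividing the two yields a quick FPS-per-dollar scalar for rough comparisons. A minimal sketch, using hypothetical prices and frame rates rather than the figures from this review:

```python
# Hypothetical (price in USD, average FPS) pairs -- illustrative
# placeholders, not the cards or numbers tested here.
cards = {
    "Card A": (240.0, 62.0),
    "Card B": (205.0, 60.0),
}

# FPS per dollar: a scalar proxy for the scatter plot's
# "closer to the top-left corner is better" rule.
for name, (price, fps) in cards.items():
    print(f"{name}: {fps / price:.3f} FPS per dollar")
```

A single ratio flattens the two dimensions, of course; the scatter plot preserves them, which is why we prefer it for the actual comparison.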

The Radeon HD 5830’s combination of a $240 price tag and performance that’s not much better than a Radeon HD 4870 doesn’t add up to a tremendous GPU-buying value, from a sheer price-performance perspective. If you’re not concerned about power consumption or a DirectX 11 feature set, you’re simply better off with a GeForce GTX 260.

Conclusions
The case against the Radeon HD 5830 was made quite clearly in the value scatter plot on the preceding page. This graphics card’s price-performance proposition just isn’t terribly attractive. That matters a lot in a product like this one, which is essentially a negotiation between the GPU maker and the consumer: we’ll cut the price this much, the performance that much, and then see what you think. The fact that you’re still better off on an FPS-per-dollar basis with a GeForce GTX 260 than a Radeon HD 5830 is a rather disappointing development in a market where we’re used to seeing practically uninterrupted progress.

The case in favor of the Radeon HD 5830 demands our consideration, as well, and is surprisingly multi-faceted. The Radeon HD 5000 series has many benefits, including the highest quality texture filtering and the fastest, best antialiasing capabilities on the market. DirectX 11 is still a bit of an unknown, but game developers do seem to be adopting it. Compared to an older card like the GTX 260, the 5830 may offer higher performance, superior visual effects, or some combination of the two in upcoming games. Beyond that, the 5830 has the ability to drive three monitors at once and play games on them, thanks to AMD’s Eyefinity feature. And we can’t forget that the peak power draw of our 5830-based system was about 25W lower than our GTX 260-based one.

Potential buyers will also have to consider the value of the Modern Warfare 2 game bundle that AMD and some of its board partners are offering. That’s a $60 game, after all, and it should be packaged with many of these cards, as will other games in some cases. We’re not always big on game bundles, but MW2 is a heckuva lot of fun. Of course, if you already own it like many of us do, that won’t matter much to you, I suppose. And if card makers really charge an extra $25 for the MW2 bundle, we’ll be less than compelled.

One question we can’t yet answer is whether the 5830’s lower power draw will translate into lower noise levels than your typical GeForce GTX 260 or Radeon HD 4890. That will depend, of course, on the coolers selected by the board guys, just as the game bundles will. As a result, the ultimate value proposition of a given 5830 card isn’t something we can precisely gauge.

Overall, though, here’s what I think. The Radeon HD 5830 fills a gap in AMD’s lineup that desperately needed filling. The fact that AMD decided to address that need in this exact way, however, is ultimately disappointing. Given the current shortage of viable alternatives and the 5830’s richer feature set, we may end up tentatively recommending the 5830 as the card to choose in this price range, but we can’t do so with any great enthusiasm. Perhaps once the final products have reached the market, we’ll have some more positive indications. The 5830 had a chance to win our unqualified recommendation, though, and it simply hasn’t done so.