
AMD’s Radeon HD 5670 graphics card

Cyril Kowaliski

With its DirectX 11 graphics cards now populating the middle and upper echelons of the market—and last year’s supply issues largely behind it—AMD is now proceeding into the low-end arena. Today, we’re getting to see the first product of that expansion: the Radeon HD 5670, which should start cropping up in e-tail stocks soon for about $99.

To keep the price tag in the double digits, AMD has taken the world’s smallest scalpel and carefully sliced away some of the pixel-crunching resources from the bigger and more powerful GPUs in Radeon HD 5700, 5800, and 5900-series cards. That means the newcomer follows a similar recipe to the rest of its brethren, delivering DirectX 11 support and multi-display capabilities in a freshly minted 40-nm chip. You’re just getting it in a smaller, fun-sized portion.

Is fun-sized still tasty, or has AMD removed too much of the icing compared to other DX11 Radeons? That’s what we’re about to find out.

The ents are going to war!
AMD has nicknamed the Radeon HD 5670’s GPU “Redwood,” keeping with the coniferous naming scheme of its DirectX 11 Evergreen family. Under the hood (or bark, rather), Redwood contains half the execution resources of Juniper, the GPU in Radeon HD 5700-series cards, with the same 128-bit memory interface. Seeing as Juniper itself has half the execution resources and half the memory interface width of the Cypress chip that powers 5800- and 5900-series offerings, one could say Redwood is a quarter Cypress with a double-wide memory interface. But that’d be oversimplifying things just a tad. Here’s a human-readable overview of Redwood’s guts:

A block diagram of Redwood. Source: AMD.

Let’s see… We have five SIMD units, each containing 80 ALUs and tied to one texture unit capable of sampling and filtering four texels per clock. That gives us 400 ALUs (or stream processors) and 20 texels/clock, down from 800 SPs and 40 texels/clock on the Radeon HD 5770. AMD has also removed two of the ROP units, leaving Redwood capable of cranking out eight pixels each clock cycle—half as many as the 5770. Again, though, both the 5670 and its bigger brother have the same memory interface setup: 128 bits wide and compatible with 4Gbps GDDR5 memory.
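
Those per-clock resources translate directly into the peak rates you'll find in the specs table below. As a quick illustration, here's the back-of-the-envelope arithmetic in Python, assuming the 5670's reference clocks of 775MHz (core) and 4Gbps effective (GDDR5):

```python
# Peak-rate arithmetic for the Radeon HD 5670 (Redwood), assuming
# reference clocks: 775MHz core, 4Gbps effective GDDR5.
core_clock_ghz = 0.775
rops = 8                # pixels output per clock
texels_per_clock = 20   # 5 texture units x 4 texels/clock
alus = 400              # stream processors, 2 flops each per clock (MAD)
bus_width_bits = 128
mem_gbps_per_pin = 4.0

print(rops * core_clock_ghz)                  # 6.2 Gpixels/s pixel fill
print(texels_per_clock * core_clock_ghz)      # 15.5 Gtexels/s bilinear filtering
print(texels_per_clock * core_clock_ghz / 2)  # ~7.8 Gtexels/s at FP16 (half rate)
print(alus * 2 * core_clock_ghz)              # 620 GFLOPS peak shader arithmetic
print(bus_width_bits / 8 * mem_gbps_per_pin)  # 64.0 GB/s memory bandwidth
```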

All of this hedge trimming has left AMD with a small (we’ll look at die sizes soon), cheap-to-manufacture GPU that’s also quite power-efficient. According to AMD, the 5670 will draw just 14W at idle and 61W under a typical load. That means no PCI Express power connectors and hopefully low noise levels, despite the spartan-looking single-slot cooler.

Otherwise, as far as we can tell, Redwood has the same DirectX 11 capabilities and hardware features as Juniper. AMD advertises Eyefinity support, too, although Redwood supports a maximum of four displays, and the reference card will only connect up to three.

The Radeon HD 5670 is but the first member of a whole budget DX11 lineup from AMD. For users with really tight budgets, the company plans to follow up in February with the Radeon HD 5500 and 5400 series. The former will cram Redwood chips and 128-bit memory interfaces into sub-50W thermal envelopes, while the 5400 series will be based on the Cedar GPU, whose memory interface is only 64 bits wide. AMD tells us the Radeon HD 5450 will have a low-profile form factor and enough brawn to run Tom Clancy’s H.A.W.X. on a three-display setup.

Considering the recent supply problems AMD has faced because of poor 40-nm yields at TSMC, one might be rightfully concerned about the availability of these products. We brought the subject up with AMD, which replied that it expects “multiple tens of thousands” of Radeon HD 5670 units to be available for the launch, followed by “similar quantities every week.” New 40-nm Radeons are seeing massive demand, AMD added, but 40-nm supply is now meeting the company’s expectations.

On the other side of the fence
Nvidia still doesn’t have any DirectX 11 products out, so if you want to get technical, the Radeon HD 5670 lacks direct competition for the time being. Out in the real world, though, Nvidia currently has two products buzzing close to the same $99 price point.

From a hardware standpoint, the GeForce GT 240 looks like the most direct competitor: it has a newish 40-nm graphics processor, GDDR5 memory, and DirectX 10.1 support. The GeForce GT 240’s GT215 graphics chip also packs 96 stream processors, two ROP units that can output eight pixels per clock, the ability to filter 32 texels per clock, and a 128-bit memory interface that can handle up to 1GB of memory. Default iterations of the card run with a 550MHz core speed, 1340MHz shader clock, and 3400MT/s GDDR5 memory.

For this round of tests, we’ve stacked up the Radeon HD 5670 against Zotac’s GeForce GT 240 512MB AMP! Edition, which comes out of the box with a non-negligible factory “overclock”—its GPU, shaders, and memory run at 600MHz, 1460MHz, and 4000MT/s, respectively. This card currently sells for $98.99 before shipping at Newegg, so it’s taking the new Radeon head-on.
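
Out of curiosity, the Zotac card's paper specs can be reconstructed the same way. One wrinkle: Nvidia rates its shaders two ways, since each stream processor can in theory co-issue an extra MUL alongside its MAD (three flops per clock rather than two), which is where the dual-issue column in our specs table comes from. A rough sketch:

```python
# Theoretical peaks for the Zotac GeForce GT 240 512MB AMP! Edition,
# using the factory clocks above: 600MHz core, 1460MHz shaders, 4000MT/s GDDR5.
core_ghz, shader_ghz = 0.600, 1.460
sps = 96                 # stream processors
pixels_per_clock = 8     # ROP output per clock
texels_per_clock = 32    # filtered texels per clock
bus_width_bits, mem_mts = 128, 4000

print(pixels_per_clock * core_ghz)          # 4.8 Gpixels/s
print(texels_per_clock * core_ghz)          # 19.2 Gtexels/s
print(sps * 2 * shader_ghz)                 # ~280 GFLOPS single-issue (MAD only)
print(sps * 3 * shader_ghz)                 # ~420 GFLOPS dual-issue (MAD + MUL)
print(bus_width_bits / 8 * mem_mts / 1000)  # 64.0 GB/s
```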

That’s not the whole story, though. Shortly after we finished running our benchmarks, Nvidia told us it had cut the price of its GeForce 9800 GT to $99 and dropped the vanilla GeForce GT 240 to $89. That means, the firm argued, that the GeForce 9800 GT is the most direct competitor to the 5670. We’re skeptical of that claim. As you can see in the pictures above and will see on the next page, the GeForce GT 240 and the Radeon HD 5670 have quite a bit in common, including 40-nm GPUs of similar sizes, 128-bit memory interfaces connected to GDDR5 memory, and no need for an external power input. The GeForce 9800 GT is a much older graphics card based on a much larger, 55-nm graphics chip; it has a wider memory interface coupled with slower GDDR3 RAM, a larger cooler and PCB, and generally requires an auxiliary power input. Also, the 9800 GT supports only DX10, while the GeForce GT 240 supports DX10.1 and the 5670 supports DX11. These three cards might briefly overlap in price here at the end of the 9800 GT’s run, but they are not truly comparable offerings, neither in capability nor in cost to produce.

Even if we did buy Nvidia’s argument, we didn’t have time to go back and add the 9800 GT to our comparison, since we only had two days to test and put this article together.

We have included the 9800 GT in our specs table, should you wish to see how it compares on paper. Let us direct your attention to the GT 240 and 5670, though, for the main event.

| Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear texel filtering rate (Gtexels/s) | Peak bilinear FP16 texel filtering rate (Gtexels/s) | Peak memory bandwidth (GB/s) | Peak shader arithmetic, single-issue (GFLOPS) | Peak shader arithmetic, dual-issue (GFLOPS) |
|---|---|---|---|---|---|---|
| GeForce 9500 GT | 4.4 | 8.8 | 4.4 | 25.6 | 90 | 134 |
| GeForce 9600 GT | 10.4 | 20.8 | 10.4 | 57.6 | 208 | 312 |
| GeForce 9800 GT | 9.6 | 33.6 | 16.8 | 57.6 | 336 | 504 |
| GeForce GT 240 | 4.4 | 17.6 | 8.8 | 54.4 | 257 | 386 |
| Zotac GeForce GT 240 | 4.8 | 19.2 | 9.6 | 64.0 | 280 | 420 |
| Radeon HD 4670 | 6.0 | 24.0 | 12.0 | 32.0 | 480 | n/a |
| Radeon HD 4770 | 12.0 | 24.0 | 12.0 | 51.2 | 960 | n/a |
| Radeon HD 5670 | 6.2 | 15.5 | 7.8 | 64.0 | 620 | n/a |
| Radeon HD 5750 | 11.2 | 25.2 | 12.6 | 73.6 | 1008 | n/a |
| Radeon HD 5770 | 13.6 | 34.0 | 17.0 | 76.8 | 1360 | n/a |

(AMD quotes a single peak shader figure for its Radeons, so the dual-issue column doesn't apply to them.)

The Radeon HD 5670 and the GeForce GT 240 match up pretty closely to one another on paper. The new Radeon leads in peak theoretical shader power and memory bandwidth, but the GT 240 has the edge in texture filtering capacity. Those of you who have been following the GPU architectures from these two firms in recent generations won’t be surprised to see such a split. Of course, how these things work out in practice doesn’t always match with the expectations set by such numbers.

Who’s got the smallest chip?

| GPU | Estimated transistor count (millions) | Approximate die size (mm²) | Fabrication process |
|---|---|---|---|
| G92b | 754 | 256 | 55-nm TSMC |
| GT215 | 727 | 144 | 40-nm TSMC |
| Redwood | 627 | 104 | 40-nm TSMC |
| Juniper | 1040 | 166 | 40-nm TSMC |
| Cypress | 2150 | 334 | 40-nm TSMC |

Please note that the numbers in the table are somewhat approximate, since they’re culled from various sources. Below are pictures of the various GPUs sized up, again approximately, next to a quarter for reference.

As you can see below, Redwood and the GT215 aren’t too far off in terms of die size and transistor count (although Redwood is the leaner of the two). Cutting computing resources by half has enabled AMD to reduce the transistor count by about 413 million compared to Juniper, resulting in a 37% smaller die. Redwood ended up even smaller than AMD’s previous $99 GPU, the 133-mm² RV740.
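
Both of those figures fall straight out of the table above; here's a quick check, using the admittedly approximate numbers:

```python
# Sanity check of the Juniper-to-Redwood diet, using the approximate
# figures from the table (transistors in millions, die area in mm^2).
juniper_transistors, juniper_die = 1040, 166
redwood_transistors, redwood_die = 627, 104

print(juniper_transistors - redwood_transistors)  # 413 million fewer transistors
print(1 - redwood_die / juniper_die)              # ~0.37, i.e. a 37% smaller die
```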


GT215 and Redwood in a staring contest


The 9800 GT’s G92b chip


The GT215


Redwood


Juniper


Cypress

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

| Component | Configuration |
|---|---|
| Processor | Core i7-965 Extreme 3.2GHz |
| System bus | QPI 6.4 GT/s (3.2GHz) |
| Motherboard | Gigabyte EX58-UD5 |
| BIOS revision | F7 |
| North bridge | X58 IOH |
| South bridge | ICH10R |
| Chipset drivers | INF update 9.1.1.1015, Matrix Storage Manager 8.9.0.1023 |
| Memory size | 6GB (3 DIMMs) |
| Memory type | Corsair Dominator TR3X6G1600C8D DDR3 SDRAM at 1333MHz |
| CAS latency (CL) | 8 |
| RAS to CAS delay (tRCD) | 8 |
| RAS precharge (tRP) | 8 |
| Cycle time (tRAS) | 24 |
| Command rate | 2T |
| Audio | Integrated ICH10R/ALC889A with Realtek 6.0.1.5919 drivers |
| Graphics | AMD Radeon HD 5670 512MB with Catalyst 8.69-091211a-093208E drivers; Zotac GeForce GT 240 512MB AMP! Edition with ForceWare 195.81 beta drivers |
| Hard drive | WD Caviar SE16 320GB SATA |
| Power supply | PC Power & Cooling Silencer 750W |
| OS | Windows 7 Ultimate x64 Edition RTM |
| OS updates | DirectX March 2009 update |

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Call of Duty: Modern Warfare 2
To test this game, we played through the first 60 seconds of the “Wolverines!” level while recording frame rates with FRAPS. We tried to play pretty much the same way each time, but doing things manually like this will naturally involve some variance, so we conducted five test sessions per GPU config. We then reported the median of the average and minimum frame rate values from all five runs. The frame-by-frame results come from a single, representative test session.
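
For the curious, here's a minimal sketch of that reduction in Python. The frame-rate values are placeholders, not real results; they just stand in for the per-run summaries FRAPS produces:

```python
import statistics

# Five manual 60-second runs for one GPU config. Each tuple is
# (average FPS, minimum FPS) as summarized by FRAPS. The values
# below are placeholders, not actual benchmark results.
runs = [(54.1, 38.0), (55.3, 40.0), (53.8, 37.0), (56.0, 41.0), (54.9, 39.0)]

reported_avg = statistics.median(avg for avg, _ in runs)  # reported average FPS
reported_min = statistics.median(mn for _, mn in runs)    # reported minimum FPS
print(f"avg: {reported_avg} FPS, min: {reported_min} FPS")
```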

We had all of MW2‘s image quality settings maxed out, with 4X antialiasing enabled, as well.

The newest Radeon looks to be off to a promising start in Modern Warfare 2, handily beating our hot-clocked GeForce GT 240 and delivering very playable frame rates at 1680×1050 with all the eye candy on. Definitely not bad for a $99 GPU.

Borderlands
We tested Gearbox’s post-apocalyptic role-playing shooter by using the game’s built-in performance test. We tested at 1366×768, 1680×1050, and 1920×1080 resolutions with all of the in-game quality options at their max, recording three playthroughs for each resolution. We couldn’t enable antialiasing, because the game’s Unreal Engine doesn’t support it.

Incidentally, we should probably point out a little snag we ran into with our lowest test resolution. 1366×768 is quickly taking over the realm of low-cost 16:9 displays, but Nvidia’s drivers don’t seem to support that resolution properly—our GeForce GT 240 ran Borderlands and our other games at 1360×768, instead. Those six missing pixel columns probably don’t make much of a difference (we are, after all, looking at a minuscule 0.4% drop, from 1.049 to 1.044 megapixels), but do keep it in mind. If the difference threatens to make your OCD flare up, just ignore those results and concentrate on the higher resolutions instead.

The GeForce GT 240 pulls ahead here, scoring a small but solid victory over the Radeon HD 5670 at our two lowest resolutions.

Things get a little bit more complicated at 1920×1080: the GeForce GT 240 somehow produced much lower minimum frame rates than its competitor, despite a higher average. You’ll probably want to turn the resolution down to avoid choppiness in fast-paced firefights with either of these cards.

Left 4 Dead 2
In Left 4 Dead 2, we got our data by recording and playing back a custom timedemo consisting of several minutes of gameplay.

The 5670 gets back on top here, and by quite a big margin.

To tell you the truth, though, even 51 FPS should be plenty smooth for zombie-killing frenzies. AMD’s card might let you crank the resolution up to 2560×1600, but we don’t imagine too many folks capable of affording a $1,000+ 30″ monitor will buy a $99 graphics card to start with.

DiRT 2
This excellent new racer packs a nicely scriptable performance test. We tested at the game’s “high” quality presets with 4X antialiasing in both DirectX 9 and DirectX 11 modes (DiRT 2 appears to lack a DX10 mode). For our DirectX 11 testing, we enabled tessellation on the crowd and water.

Chalk up another win for the AMD camp. On the Radeon, pushing the resolution from 1680×1050 to 1920×1080 barely impacts frame rates and keeps the minimum FPS at a very respectable 33. You can also enable DX11 effects, although that induces a sizable performance hit. The tessellated crowd probably deserves most of the blame there.

Power consumption
We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at a 2560×1600 resolution. We have a broader set of results here because we’ve included those from our Radeon HD 5970 review, which were themselves largely composed of data from our Radeon HD 5700-series review. Although we used older drivers for most of the cards in that review, we don’t expect that to affect power consumption, noise, or GPU temperatures substantially.

Interestingly, while the Radeon HD 5670 has a smaller GPU, it draws a teeny bit more power than our factory “overclocked” GeForce GT 240. Still, this is a very close race.

Noise levels
We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

Under load, the Radeon HD 5670 is the quietest contestant on our test bench by far. (The Radeon HD 5870 kept that crown in the idle test.) Meanwhile, the Zotac GeForce GT 240’s fan seems to spin away at the same speed regardless of what you’re doing with your computer, producing quite a bit more noise in the process.

GPU temperatures
We used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, we recorded temperatures on the primary card.

Ah, so those blissfully low load noise levels on the Radeon come with a trade-off. The 5670 sees its temperature shoot past the GeForce GT 240 and even the Radeon HD 5770 under load, settling at a toasty 71°C. We’ve seen far worse, though, especially from AMD’s last generation of cards.

Conclusions
AMD has succeeded in bringing DirectX 11 under the $100 mark with the Radeon HD 5670, and for the money, this is a very capable little graphics card. Aside perhaps from Borderlands, the 5670 ran all of our games smoothly at 1680×1050 with antialiasing and detail levels cranked up. Gamers on tight budgets shouldn’t require much more performance than that. Also, because it’s based on AMD’s latest architecture, this newcomer may perform better than its predecessors in GPU-compute applications—another selling point that could gain importance in the not-too-distant future.

Compared to the GeForce GT 240, the most similar design Nvidia offers right now, the new Radeon looks to be faster overall. Sometimes by quite a bit.

Nvidia’s recent pricing moves do put the Radeon HD 5670 up against the GeForce 9800 GT, as well. As we noted earlier, the 9800 GT likely offers higher overall performance, but it has the downside of being older, larger, hungrier for power—and, since it’s a 55-nm, DX10-only part, probably not long for this world.

If you want higher performance without having to compromise on the feature set, we would direct you to another Radeon instead. For a little over $50 more, AMD offers the Radeon HD 5770 1GB, a considerably faster product capable of running every new game out there at a 1080p resolution. Not a bad trade-up for about the price of a game. That price difference might make the 5770 too big a leap for the most cash-strapped among us, and casual gamers probably shouldn’t bother. Still, anyone with a 1080p monitor really ought to consider stepping up.

As a side note, the relative strength of the Radeon HD 5670 bodes well for AMD’s new, notebook-bound Mobility Radeon HD 5700 and 5600 GPUs. AMD based those parts on the same Redwood chip, although it lowered speeds slightly to 650MHz for the GPU and 3.2Gbps for the memory. Nevertheless, Redwood’s mobile incarnation seems like it could deliver great gaming performance at notebook-friendly resolutions, hopefully without forcing users to lug around a big, bulky laptop.
