
Diamond’s Radeon HD 2900 XT 1GB graphics card

Scott Wasson
IF YOU’VE BEEN PAYING attention, you probably already know that AMD elected not to reach for the overall graphics performance crown when it introduced its new generation of Radeons, including the flagship Radeon HD 2900 XT. Instead, the company chose to target the $399 mark, where the 2900 XT would compete against the GeForce 8800 GTS. This was a surprising move, indeed—practically without precedent in the modern GPU race.

AMD did it for solid reasons, of course, including the fact that its new graphics processor wasn’t quite up to the task of taking on the GeForce 8800 GTX head to head. That makes the product we’re reviewing today all the more interesting, because it’s a souped-up version of the Radeon HD 2900 XT with a full gig of screaming fast GDDR4 memory. AMD has practically kept mum on this product, choosing not to point out its existence or highlight it in any way. But Diamond and a handful of other AMD partners have been shipping the cards to PC makers for a few weeks now, and contrary to the initial plan, cards are quietly beginning to show up at online retailers, as well.

This card, with a staggering 128 GB/s of memory bandwidth, raises a number of intriguing questions: about the role of memory bandwidth, about graphics memory size (how much is enough?), and most of all, about the potential of AMD’s R600 GPU. Some time has passed since the 2900 XT’s debut, drivers have had time to mature, and here we have a faster version of the card. Can this new 2900 XT take on Nvidia’s best? Are the matchups altered in DirectX 10 games? And how does the UVD-less 2900 XT really perform in HD video playback? Read on for all of these answers and more.

The card
You wouldn’t know this Radeon HD 2900 XT was anything special by looking at it. It looks the same as the original 512MB version, right down to the cheesy metallic flame job on its cooler. Underneath that cooler, though, lies an array of GDDR4 memory chips with double the density of the GDDR3 chips on the original 2900 XT. GDDR4 debuted on the Radeon X1950 XTX, and at that time, ATI claimed this new RAM type could achieve higher clock speeds while drawing less power per clock cycle. We were somewhat surprised, then, to see Nvidia introduce a whole new lineup of world-beating graphics cards without using GDDR4, and we were even more startled when AMD didn’t lead off with a GDDR4 version of the 2900 XT.

That card is here at last, though, and Diamond is planning two versions: one with a stock 743MHz core clock and 2GHz memory and another, “overclocked” variant with an 825MHz core and 2.1GHz memory. Both offer quite a bit more memory throughput than the original 2900 XT, whose GDDR3 memory runs at 825MHz; over its 512-bit memory interface, the GDDR3 2900 XT has a theoretical peak of 106 GB/s. Our 2900 XT 1GB GDDR4 card is the slower of the two versions, but it still has 128 GB/s of memory bandwidth, well above the stock 2900 XT and nearly 50% more than the GeForce 8800 GTX’s 86.5 GB/s.
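
These bandwidth figures are straightforward arithmetic, and it’s easy to check them yourself. Here’s a minimal sketch in Python using the cards’ public bus widths and memory clocks; the only wrinkle is that DDR-type memory (GDDR3 and GDDR4 included) transfers data twice per clock, which is why “2GHz memory” corresponds to a 1GHz base clock.

```python
# Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
# DDR-type memory moves data twice per clock cycle.

def peak_bandwidth_gbps(bus_width_bits: int, mem_clock_ghz: float) -> float:
    """Theoretical peak bandwidth in GB/s for a DDR-type memory interface."""
    effective_gtps = 2 * mem_clock_ghz          # two transfers per clock
    return (bus_width_bits / 8) * effective_gtps

print(peak_bandwidth_gbps(512, 0.825))  # 2900 XT GDDR3:      105.6 GB/s
print(peak_bandwidth_gbps(512, 1.000))  # 2900 XT 1GB GDDR4:  128.0 GB/s
print(peak_bandwidth_gbps(512, 1.050))  # 2900 XT 1GB OC:     134.4 GB/s
print(peak_bandwidth_gbps(384, 0.900))  # 8800 GTX: 86.4 GB/s (86.5 in our table, after rounding)
```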

As I’ve mentioned, AMD initially planned to sell these cards exclusively through system builders, who would presumably stuff them into $5000 PCs painted metallic green. Fortunately, though, individual cards are already starting to show up at online vendors, and Diamond expects its branded cards to be in stock at Newegg soon. Street prices look to be between $489 and $549, which makes this card a fairly direct competitor to the GeForce 8800 GTX.

Oh, yeah. I took some pictures.

Here’s a look at the funky eight-pin power plug on the 2900 XT. This plug will accept a six-pin connector, but AMD’s Overdrive overclocking utility isn’t available unless an eight-pin power plug is connected. I’ve never seen an adapter to convert another connector type to an eight-pin one, either, so you’re probably looking at a power supply upgrade if you want to overclock this card.

These are the “native” CrossFire connectors on the 2900 XT. In case you didn’t see the memo, dongles aren’t needed anymore.

Fresh competition
Nvidia hasn’t been sitting still as AMD has released its new products. Fortunately, this review gives us a chance to catch up on some new developments from the GeForce folks, as well. Among them is this EVGA GeForce 8800 GTS “Superclocked” card:

We took a brief look at this card in our initial Radeon HD 2900 XT review, and we said we’d follow up later. Now is the time. This card has a 575MHz core clock and 1.7GHz memory, a nice boost from “stock” GTS speeds, and it’s fully covered by its warranty at those clock speeds. At $394 at Newegg, this is tough new competition for the GDDR3 version of the 2900 XT.

These monsters are GeForce 8800 Ultra cards from XFX. We were underwhelmed by the Ultra when it was introduced because it didn’t offer much of a speed boost over the 8800 GTX, yet it cost quite a bit more. We said at the time that higher-clocked versions of the card might bring some redemption for the Ultra and pledged to keep an eye on it. Well, here you have the GeForce 8800 Ultra “XXX” from XFX, clocked at a stratospheric 675MHz, with 2.3GHz memory and a 1667MHz shader core. Now that’s a little more like it. Furthermore, the price of even this stupid fast Ultra is down to $699.99 at ZipZoomFly. It ain’t cheap, but it’s certainly not the heart-stopping $830 price tag attached to Ultras at their launch, either. Perhaps a measure of redemption for the Ultra is, ahem, in the cards?

The matchup
Before we dive into our test results, let’s pause to have a look at the theoretical matchups between these cards.

                                     Peak pixel    Peak texel       Peak memory   Peak shader
                                     fill rate     filtering rate   bandwidth     throughput
                                     (Gpixels/s)   (Gtexels/s)      (GB/s)        (GFLOPS)
GeForce 8800 GTS                     10.0          12.0             64.0          346
EVGA GeForce 8800 GTS Superclocked   11.5          13.8             68.0          389
GeForce 8800 GTX                     13.8          18.4             86.5          518
GeForce 8800 Ultra                   14.7          19.6             103.7         576
XFX GeForce 8800 Ultra XXX           16.2          21.6             110.4         640
Radeon HD 2900 XT                    11.9          11.9             105.6         475
Radeon HD 2900 XT 1GB GDDR4          11.9          11.9             128.0         475
Radeon HD 2900 XT 1GB GDDR4 OC       13.2          13.2             134.4         528

I don’t want to dwell too long on these numbers, because they’re just theoretical peaks, but such things do tend to matter in graphics. The big things to take away from this table are pretty obvious. The GeForce 8800 cards tend to beat out the Radeon HD 2900 XT cards in texture filtering rate—in some cases, like the 2900 XT 1GB GDDR4 vs. the 8800 GTX, by quite a bit. The GeForce 8800 is simply loaded on this front. Conversely, the various flavors of the Radeon HD 2900 XT tend to have vastly more memory bandwidth than the competition, as we’ve already noted.

Peak shader throughput is a tricky thing, but I’ll give you my take. The numbers above give Nvidia’s G80 GPU credit for a MUL instruction that it can co-issue in certain situations. If you take that away, which might be the more realistic thing to do, the G80’s numbers above would be reduced by a third. I think the peak numbers we have listed for the R600 are a little more solid, but this GPU’s real-world performance may be impaired by the superscalar nature of its execution units. Scheduling instructions optimally on such a machine can be challenging. The G80’s scalar architecture is more naturally efficient. My sense is that the R600 has more peak raw shader power, but that may not matter much in the end.
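
To make that concrete, here’s the arithmetic behind the table’s shader figures, as a quick sketch using the commonly cited unit counts and clocks for these GPUs (128 scalar SPs at 1.35GHz on the 8800 GTX, 320 superscalar ALUs at 742MHz on the R600):

```python
# Peak programmable shader throughput, counting a MADD as two flops.

def peak_gflops(alus: int, shader_clock_ghz: float, flops_per_clock: int) -> float:
    return alus * shader_clock_ghz * flops_per_clock

# GeForce 8800 GTX: 128 scalar SPs at 1.35GHz. Counting the MADD (2 flops)
# plus the co-issued MUL (1 flop) gives 3 flops per SP per clock.
print(peak_gflops(128, 1.35, 3))   # 518.4 GFLOPS, the table's figure
# Discount the co-issued MUL and the peak drops by exactly a third:
print(peak_gflops(128, 1.35, 2))   # 345.6 GFLOPS

# Radeon HD 2900 XT: 320 superscalar ALUs at 742MHz, each issuing a MADD.
print(peak_gflops(320, 0.742, 2))  # ~475 GFLOPS
```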

Now, let’s have a look at performance.

Test notes
Both AMD and Nvidia released new drivers during our testing period for this article, so we dealt with the change the best we could on a rolling basis.

For the Radeon cards, we began using the Catalyst 7.6 drivers, and then Cat 7.7 was released. The new drivers’ release notes didn’t promise much in the way of performance enhancements, but they did offer fixes for Lost Planet: Extreme Condition. We found that Lost Planet ran much faster with Cat 7.7, so we’ve included results using those drivers. Of our other DX10 apps, Company of Heroes was no faster with Cat 7.7 and Call of Juarez was slightly slower, so we stuck with our Cat 7.6 results. All video, power, and noise testing was conducted with Cat 7.7.

We started with ForceWare 158.45 on the GeForce cards, but employed the 163.11 beta release for testing Company of Heroes and Lost Planet, since the new drivers have specific fixes for those games. Call of Juarez was no faster with the 163.11 drivers, so we retained our 158.45 results. All video, power, and noise testing was conducted with 163.11.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor                 Core 2 Extreme X6800 2.93GHz          Core 2 Extreme X6800 2.93GHz
System bus                1066MHz (266MHz quad-pumped)          1066MHz (266MHz quad-pumped)
Motherboard               XFX nForce 680i SLI                   Asus P5W DH Deluxe
BIOS revision             P26                                   1901
North bridge              nForce 680i SLI SPP                   975X MCH
South bridge              nForce 680i SLI MCP                   ICH7R
Chipset drivers           ForceWare 15.00                       INF update 8.1.1.1010,
                                                                Matrix Storage Manager 6.21
Memory size               4GB (4 DIMMs)                         4GB (4 DIMMs)
Memory type               2 x Corsair TWIN2X2048-8500C5D        2 x Corsair TWIN2X2048-8500C5D
                          DDR2 SDRAM at 800MHz                  DDR2 SDRAM at 800MHz
CAS latency (CL)          4                                     4
RAS to CAS delay (tRCD)   4                                     4
RAS precharge (tRP)       4                                     4
Cycle time (tRAS)         18                                    18
Command rate              2T                                    2T
Hard drive                Maxtor DiamondMax 10 250GB SATA 150   Maxtor DiamondMax 10 250GB SATA 150
Audio                     Integrated nForce 680i SLI/ALC850     Integrated ICH7R/ALC882M
                          with Microsoft drivers                with Microsoft drivers
OS                        Windows Vista Ultimate x86 Edition    Windows Vista Ultimate x86 Edition

Graphics cards tested (all PCIe):

EVGA GeForce 8800 GTS OC 640MB, single and in SLI, with ForceWare 158.45 and 163.11 drivers
MSI GeForce 8800 GTX 768MB, plus dual GeForce 8800 GTX 768MB in SLI, with ForceWare 158.45 and 163.11 drivers
XFX GeForce 8800 Ultra XXX 768MB, single and in SLI, with ForceWare 158.45 and 163.11 drivers
Radeon HD 2900 XT 512MB, single and in CrossFire, with Catalyst 7.6 and 7.7 drivers
Radeon HD 2900 XT 1GB GDDR4, single and in CrossFire, with Catalyst 7.6 and 7.7 drivers

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by PC Power & Cooling Turbo-Cool 1KW-SR power supply units. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Lost Planet: Extreme Condition
Lost Planet is one of the first games to use DirectX 10, and it has a DirectX 9 mode, as well, so we can do some direct comparisons. DX9 and DX10 are more or less indistinguishable from one another visually in this game, as far as I can tell. They both look gorgeous, though. This is one very good looking game, with subtle HDR lighting and a nifty motion-blur effect.

We tested by manually playing through a specific portion of the game and recording frame rates with FRAPS. Each gameplay sequence lasted 90 seconds, and we recorded five separate sequences per graphics card. This method has the advantage of simulating real gameplay quite closely, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
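
For clarity, here’s a sketch of how that reduction works: an average over all samples, plus the median of the five per-run lows. The per-second FPS data structure is hypothetical, but the statistics match what’s described above.

```python
# Reduce per-second FRAPS samples from five 90-second sessions to the two
# figures we report: average FPS and the median of the per-run low FPS.
from statistics import median

def summarize_sessions(sessions: list[list[float]]) -> tuple[float, float]:
    """sessions: one list of per-second FPS samples per gameplay run."""
    all_samples = [fps for run in sessions for fps in run]
    average_fps = sum(all_samples) / len(all_samples)
    median_low_fps = median(min(run) for run in sessions)  # damps outliers
    return average_fps, median_low_fps
```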

We tested with fairly high quality settings. The game’s various quality options were set to “high” with the exception of shadow quality and shadow resolution, which were set to “medium.” 4X antialiasing and 16X anisotropic filtering were enabled.

We did hit one snag in the DX10 version of Lost Planet. The Radeon HD cards didn’t appear to be applying anisotropic filtering to most surfaces, regardless of whether aniso was enabled in the game settings or forced on via Catalyst Control Center. Be aware of that as you’re looking at the results.

The move to more and faster memory doesn’t seem to help the Radeon HD 2900 XT much at all. The 1GB GDDR4 card is only slightly faster than the 512MB one, if at all. That leaves the 1GB just ahead of the GeForce 8800 GTS and trailing the GeForce 8800 GTX in the DX9 version of Lost Planet. The situation grows more acute in DX10, where the 1GB GDDR4 card falls behind the 8800 GTX. Every config we tested suffered a performance drop with DX10, though.

CrossFire isn’t effective with either version of DirectX. I tried setting Catalyst A.I. to “Advanced” in order to force on alternate frame rendering, but it didn’t help. The game still ran fine, but it wasn’t any faster. I also ran into an odd problem, with or without CrossFire, where Lost Planet’s main menu was exceptionally slow, to the point of being almost unusable, on the Radeons. This problem only seemed to affect the DX9 version of the game, for whatever reason.

Before we move on, I’d like to take a quick look at performance without antialiasing. I explained in my Radeon HD 2400 and 2600 series review that AMD’s new GPUs use their shader cores to handle the resolve stage of multisampled antialiasing, something we didn’t know when we first reviewed the 2900 XT. The question is: how does this limitation impact performance? I figured Lost Planet’s DX9 version would be an interesting place to have a look. Here’s what I found.

                             Lost Planet DX9   Lost Planet DX9   Performance
                             average FPS,      average FPS,      penalty
                             no AA             4X AA
Radeon HD 2900 XT 1GB        52.0              39.5              24%
GeForce 8800 GTX             50.1              47.1              6%

Without antialiasing, the Radeon HD 2900 XT 1GB GDDR4 is outright faster than the GeForce 8800 GTX, its direct competition. When we enable 4X AA, though, the Radeon’s performance dips precipitously, by 24%. By comparison, the 8800 GTX only suffers a 6% drop. Of course, we’re not likely to want to run without at least 4X AA when using a $400-500 graphics card, which is the whole problem. Store that away somewhere as you think about why the Radeon HD series hasn’t entirely lived up to its potential.
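
To make the “resolve” step concrete: resolving multisampled AA just means filtering each pixel’s stored subsamples down to one final color. Here’s a conceptual sketch of a plain box-filter resolve; a pass like this runs on the R600’s shader core rather than in dedicated hardware, which helps explain the larger penalty above.

```python
# Conceptual box-filter MSAA resolve: average each pixel's N subsamples
# down to a single output color. R600 performs this step on its shader
# core; G80 handles it in fixed-function hardware in most cases.

def resolve_pixel(subsamples: list[tuple[float, float, float]]) -> tuple[float, ...]:
    """Average N multisampled RGB colors into one resolved pixel."""
    n = len(subsamples)
    return tuple(sum(channel) / n for channel in zip(*subsamples))

# 4X MSAA on an edge pixel: two red subsamples, two blue ones.
print(resolve_pixel([(1.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                     (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]))  # (0.5, 0.0, 0.5)
```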

Company of Heroes
Company of Heroes can also use DirectX 10, and its developers have included a simple performance test in the game. This test plays through an introductory cinematic scene and reports the average and low frame rates. We tested with all of the game’s visual quality options maxed, along with 4X antialiasing. Physics detail was set to “Medium.”

The 1GB GDDR4 card isn’t much faster than its 512MB sibling here, either. If there’s a bright spot, it’s the median low frame rate for the 1GB GDDR4 card, which matches that of the GeForce 8800 Ultra. CrossFire is a bust in this test, as well.

Call of Juarez
Next up is the Call of Juarez DX10 demo and benchmark, created by the game’s developers, Techland. This test employs a number of the game engine’s DirectX 10 features, including waterfall particles that use geometry shaders, soft-edged vegetation, and advanced material shaders. Looks purty, too. We tested at 1920×1200 resolution with 4X antialiasing, “normal” shadow quality, and 2048×2048 shadow map resolution.

The Radeon HD 2900 XT cards look much better in this test, beating out the 8800 GTX. One reason for this result may be Techland’s controversial decision to bypass hardware-based MSAA resolve and force all DX10 cards to use their shaders. Nvidia has complained about this decision, though it does seem to have some technical warrant. Then again, the two previous DX10 games we tested were part of Nvidia’s “The Way It’s Meant to Be Played” program, so who knows? We’ve looked at three DX10 titles, and they’re all still disputed ground. Crysis can’t come soon enough. Let’s head back to DX9 and some well-established games.

The Elder Scrolls IV: Oblivion
For this test, we went with Oblivion’s default “ultra high quality” settings augmented with 4X antialiasing and 16X anisotropic filtering, both forced on via the cards’ driver control panels. HDR lighting was enabled. We strolled around the outside of the Leyawiin city wall, as shown in the picture below, and recorded frame rates with FRAPS. This area has loads of vegetation, some reflective water, and some long view distances.

Even at 2560×1600 with 4X AA and 16X aniso, the 1GB GDDR4 card has only the slightest lead on the 2900 XT 512MB.

Supreme Commander
Like many RTS and isometric-view RPGs, Supreme Commander isn’t exactly easy to test well, especially with a utility like FRAPS that logs frame rates as you play. Frame rates in this game seem to hit steady plateaus at different zoom levels, complicating the task of getting meaningful, repeatable, and comparable results. For this reason, we used the game’s built-in “/map perftest” option to test performance, which plays back a pre-recorded game.

I’ve omitted CrossFire results due to some compatibility problems with this game.

Once more, the 1GB GDDR4 card’s monster memory bandwidth isn’t much help.

HD video playback – H.264
Next up, we have some high-definition video playback tests. We’ve measured both CPU utilization and system-wide power consumption during playback using a couple of HD DVD movies with different encoding types. The first of those is Babel, a title encoded at a relatively high ~25 Mbps with H.264/AVC. We tested playback during a 100-second portion of Chapter 3 of this disc and captured CPU utilization with Windows’ perfmon tool. System power consumption was logged using an Extech 380803 power meter.
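
We used perfmon for the actual numbers; for anyone who wants to reproduce the idea, a rough equivalent using the third-party psutil package might look like the sketch below. The 100-second window matches our test clip.

```python
# Sample system-wide CPU utilization once per second for the duration of
# the playback window, then report the average, roughly what we pulled
# from perfmon's logs.
import time
import psutil  # third-party: pip install psutil

def average_cpu_utilization(duration_s: int = 100) -> float:
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        # Blocks for one second and returns utilization over that interval.
        samples.append(psutil.cpu_percent(interval=1.0))
    return sum(samples) / len(samples)

print(f"Average CPU utilization: {average_cpu_utilization():.1f}%")
```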

We conducted these tests at 1920×1080 resolution on the Radeon HD cards and at 1280×800 on the GeForce 8800 cards, since they don’t support HDCP with dual-link DVI and thus can’t play HD DVD movies at higher resolutions on our Dell 3007WFP display. (We should note that this limitation shouldn’t apply to single-link DVI connections, even at 1920×1200 resolution.) To confirm that scaling the movie down to lower resolutions wouldn’t put the GeForce cards at a disadvantage, I also did some testing with the Radeon HD 2900 XT at 1280×800 resolution and found that playing movies at the lower resolution didn’t have a significant impact on power use or CPU utilization.

Also, I disabled noise reduction on the GeForce 8800 cards for our HD DVD playback tests. Nvidia’s post-processing results in a net loss of image quality, so leaving it on didn’t make any sense. ATI’s noise reduction looks better, which is good, because it can’t be disabled via Catalyst Control Center. I did try the GeForce 8800 Ultra with noise reduction enabled, and the feature had only a minimal impact on power consumption and CPU use.

CPU utilization is comparable among all of these high-end cards because none of them provide robust acceleration for HD video formats. We’ve seen CPU utilization numbers between 10 and 15% on the same test system with low-end cards that provide such acceleration capabilities.

Among the high-end cards, the GeForce 8800s use slightly less CPU time to play the movie. The 2900 XT cards draw less power during playback, though, and the 1GB GDDR4 version averages about 6 watts less than its GDDR3 counterpart.

HD video playback – VC-1
Unlike Babel, Peter Jackson’s version of King Kong is encoded in the VC-1 format that’s more prevalent among HD DVD movies right now. This disc is encoded at a more leisurely ~17 Mbps.

This one is a clear win for the GeForce 8800 cards on CPU utilization. The 2900 XT 1GB GDDR4 again draws the least power of the bunch during playback, though.

HD HQV video image quality
We’ve seen how these cards compare in terms of CPU utilization and power consumption during HD video playback, but what about image quality? That’s where the HD HQV test comes in. This HD DVD disc presents a series of test scenes and asks the observer to score the device’s performance in dealing with specific types of potential artifacts or image quality degradation. The scoring system is somewhat subjective, but generally, the differences are fairly easy to spot. If a device fails a test, it usually does so in obvious fashion. I conducted these tests at 1920×1080 resolution on all cards. Here’s how they scored.

                                 GeForce    GeForce    GeForce      Radeon HD   Radeon HD 2900 XT
                                 8800 GTS   8800 GTX   8800 Ultra   2900 XT     1GB GDDR4
HD noise reduction               20         20         20           20          20
Video resolution loss            20         20         20           20          20
Jaggies                          10         10         10           20          20
Film resolution loss             0          0          0            25          25
Film resolution loss – Stadium   0          0          0            10          10
Total score                      50         50         50           95          95

I don’t think the video post-processing features in Nvidia’s 163.11 drivers are 100% cooked for the GeForce 8800 lineup. We’ve seen them achieve nearly a perfect score in HQV on a GeForce 8600, but no such result here. I deducted points from the GeForce cards in the jaggies test due to some odd, intermittent corruption artifacts that, when present, were worse than the jaggies themselves. I suppose a score of zero might be appropriate here, if you’re not feeling lenient. Also, the film resolution loss tests are firm fails.

All cards lost points in the HD noise reduction test for artifacts. The Nvidia cards had some funky temporal artifacts, and the Radeons had some obvious de-interlacing problems. Ultimately, the Radeon HD cards come out looking much better in the final score.

That’s not the whole story, though. I’ve found that Nvidia’s noise reduction algorithm can be effective in HQV, but it’s unfortunately a net negative when used with actual HD movies. The algorithm introduces color banding and other artifacts that really annoy me. Nvidia has a long way to go before its noise reduction produces a good HQV score and is something you’d want to use every day.

That said, in my experience, noise reduction and other post-processing techniques aren’t really necessary for most high-quality HD content like HD DVD movies. Those discs were mastered with a particular look to them, and the video looks great without any extra filtering; taking out film grain may even sully the director’s intent. HQV tests tough cases where the video source has problems. If a card can overcome those, it’s done something special and worthwhile.

For me, the biggest issue of all is the GeForce 8800 cards’ inability to support HDCP over dual-link DVI, which effectively kills their ability to play back HD movies on our gorgeous digital HD display. I’d worry a lot more about that than I would about an HD HQV score.

Power consumption
We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Oblivion at 2560×1600 resolution with the game’s “ultra high quality” settings, 4X AA, and 16X anisotropic filtering. We loaded up the game and ran it in the same area where we did our performance testing.

Remember, we were forced to use a different motherboard for CrossFire testing, and that will affect system power consumption and noise levels.

The 1GB GDDR4 card again draws less power than its GDDR3 compatriot, and the GDDR4 card pulls fewer watts at idle than anything else in the field. The picture changes a little when running a game. The 2900 XT 1GB GDDR4 consumes just as much power as a GeForce 8800 GTX, but it generally offers lower performance than the GTX.

Incidentally, XFX’s high-clock version of the GeForce 8800 Ultra seems to be paying for its high clock speeds and killer performance at the wall socket. The SLI rig with Ultras pulls 586 watts under load. Yow.

Noise levels and cooling
We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 14″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the Zalman CNPS9500 LED we used to cool the CPU. The CPU cooler was set to run at a steady speed, 60% of its peak. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

For these tests, we had to swap in an OCZ GameXStream 700W power supply. The 1kW PC Power and Cooling unit offers astounding power, but it’s a little too loud to use during sound level testing.

I’ve said it before, and I’ll say it again. I’m pleased to see that my subjective impressions are confirmed by the decibel meter. The cooler on the 1GB GDDR4 card is noticeably louder than the one on our Radeon HD 2900 XT 512MB review unit. The 512MB card was an early sample, and I assume the 1GB GDDR4 card is more representative of production cards.

This thing is noisy, folks. There’s a nearly 10 decibel gap between the 8800 GTX and the 2900 XT 1GB GDDR4 when running a game. That may not look like much in the graph above, but believe me, it’s an awful lot when you’re in the same room with it.

Interestingly enough, all of the 8800 cards are sufficiently quiet that, even under load, you won’t hear much out of them. I’m pretty certain that what’s registering on the dB meter for those cards is the additional noise coming out of our power supply fan, especially when it’s straining under the load of an SLI system. Not so with the 2900 XT cards, which overwhelm the PSU fan with their own coolers’ noise.

Overclocking
I didn’t spend a lot of time overclocking this card, but I did want to see how it would perform at the 825MHz core and 1050MHz (or 2100MHz DDR) speeds of the “OC” version Diamond is planning to offer.

Not bad, but not enough to catch the GeForce 8800 GTX. The card did seem to handle these clock speeds just fine, though. I didn’t see any hint of instability or image artifacts when it was running at these speeds.

Conclusions
At the end of the day, the Radeon HD 2900 XT 1GB GDDR4 remains a really intriguing product. With a full gig of RAM, a 512-bit memory interface, screaming fast GDDR4 chips, and more bandwidth than Fruit of the Loom, this puppy is halfway to being the fastest (and most expensive) video card on earth. Yet the GPU onboard can’t seem to capitalize on the opportunity. The 1GB GDDR4 version of the Radeon HD 2900 XT wasn’t significantly faster than the original in any of the games we tested, including brand-new DX10 titles and new-ish DX9 titles running at very high resolutions and quality levels. That’s bad news for this video card, because it costs as much as a GeForce 8800 GTX and performs more like a GeForce 8800 GTS, which is not the best value proposition.

This weakness is underscored by the fact that the 2900 XT doesn’t deliver lower CPU utilization than the GeForce 8800 series when playing back HD movies. AMD made it sound like the 2900 XT would benefit from UVD acceleration, but the GPU simply lacks that capability. In fact, the 2900 XT uses more CPU time during HD video playback than a competing GeForce 8800. AMD’s noise reduction and post-processing routines for HD video are superior, however.

Too bad that noise reduction doesn’t extend to the card’s cooler, which is just plain loud.

The 2900 XT does have some things to recommend it, including AMD’s nifty new tent filters for antialiasing. I remain convinced that the Radeon HD 2900 XT produces the highest quality images on the PC because of this feature. AMD just recently incorporated a new custom filter into its Catalyst 7.7 drivers that does an edge-detect pass and then selectively applies up to 24X antialiasing where needed. I played with that feature briefly while putting together this review, and I came away with mixed feelings. The image quality is superb, but the performance hit is devastating. I expect this is the sort of feature one would only use in really old games, where performance is never an issue. I’ll have to play with the edge-detect filter more, but I think the tent filters are a better option overall.

Beyond that, most of what’s left for the Radeon HD 2900 XT 1GB GDDR4 is hope for a better future. Perhaps AMD will someday deliver new video drivers that produce dramatically higher performance. Perhaps the first wave of true DirectX 10-only titles will alter the landscape radically. Perhaps the graphics usage model will change over time to where the 2900 XT’s relatively weak edge AA and texture filtering capacity doesn’t matter so much. Any or all of these things could happen. Or perhaps not.
